Feb 18 11:36:36 crc systemd[1]: Starting Kubernetes Kubelet... Feb 18 11:36:37 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 11:36:37 
crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 
11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc 
restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 
crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 
crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 
11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:36:37 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:37 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:36:38 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 11:36:38 crc kubenswrapper[4922]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.710793 4922 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716320 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716344 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716351 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716376 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716384 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716390 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716397 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716405 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716411 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716417 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716424 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716433 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716442 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716449 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716457 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716462 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716468 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716474 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716479 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716485 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716490 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716496 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716502 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716507 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716514 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716521 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716527 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716533 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716539 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716544 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716550 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716557 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716564 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716570 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716576 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716582 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716588 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716593 4922 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716598 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716604 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716609 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716615 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716622 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716627 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716632 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716637 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716643 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716648 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716653 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716658 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716663 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716668 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716674 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716679 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716684 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716689 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716694 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716699 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716704 4922 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716709 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716714 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716719 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716724 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716729 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716736 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716741 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716747 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716752 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716757 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716762 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.716767 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716875 4922 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716888 4922 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716899 4922 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716907 4922 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716916 4922 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716922 4922 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716931 4922 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716940 4922 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716947 4922 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716953 4922 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716960 4922 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716966 4922 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716973 4922 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716979 4922 flags.go:64] FLAG: --cgroup-root="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716986 4922 flags.go:64] 
FLAG: --cgroups-per-qos="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716992 4922 flags.go:64] FLAG: --client-ca-file="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.716998 4922 flags.go:64] FLAG: --cloud-config="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717004 4922 flags.go:64] FLAG: --cloud-provider="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717010 4922 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717018 4922 flags.go:64] FLAG: --cluster-domain="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717024 4922 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717030 4922 flags.go:64] FLAG: --config-dir="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717036 4922 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717043 4922 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717052 4922 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717058 4922 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717065 4922 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717071 4922 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717078 4922 flags.go:64] FLAG: --contention-profiling="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717083 4922 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717090 4922 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717096 4922 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717105 4922 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717114 4922 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717120 4922 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717127 4922 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717133 4922 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717139 4922 flags.go:64] FLAG: --enable-server="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717146 4922 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717155 4922 flags.go:64] FLAG: --event-burst="100" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717162 4922 flags.go:64] FLAG: --event-qps="50" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717170 4922 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717177 4922 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717183 4922 flags.go:64] FLAG: --eviction-hard="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717191 4922 flags.go:64] FLAG: 
--eviction-max-pod-grace-period="0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717198 4922 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717204 4922 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717210 4922 flags.go:64] FLAG: --eviction-soft="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717216 4922 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717222 4922 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717237 4922 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717243 4922 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717249 4922 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717255 4922 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717261 4922 flags.go:64] FLAG: --feature-gates="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717297 4922 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717303 4922 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717310 4922 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717316 4922 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717322 4922 flags.go:64] FLAG: --healthz-port="10248" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717329 4922 flags.go:64] FLAG: --help="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717334 4922 flags.go:64] FLAG: --hostname-override="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717340 4922 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717346 4922 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717353 4922 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717378 4922 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717386 4922 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717392 4922 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717398 4922 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717404 4922 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717410 4922 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717416 4922 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717422 4922 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717428 4922 flags.go:64] FLAG: --kube-reserved="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717434 4922 flags.go:64] FLAG: 
--kube-reserved-cgroup="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717440 4922 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717447 4922 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717453 4922 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717459 4922 flags.go:64] FLAG: --lock-file="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717465 4922 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717471 4922 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717477 4922 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717486 4922 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717493 4922 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717499 4922 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717505 4922 flags.go:64] FLAG: --logging-format="text" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717512 4922 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717519 4922 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717525 4922 flags.go:64] FLAG: --manifest-url="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717531 4922 flags.go:64] FLAG: --manifest-url-header="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717539 4922 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717546 4922 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717554 4922 flags.go:64] FLAG: --max-pods="110" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717560 4922 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717566 4922 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717573 4922 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717579 4922 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717586 4922 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717592 4922 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717598 4922 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717612 4922 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717618 4922 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717625 4922 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717631 4922 flags.go:64] FLAG: --pod-cidr="" Feb 18 11:36:38 crc 
kubenswrapper[4922]: I0218 11:36:38.717637 4922 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717646 4922 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717652 4922 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717658 4922 flags.go:64] FLAG: --pods-per-core="0" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717664 4922 flags.go:64] FLAG: --port="10250" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717670 4922 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717676 4922 flags.go:64] FLAG: --provider-id="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717682 4922 flags.go:64] FLAG: --qos-reserved="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717688 4922 flags.go:64] FLAG: --read-only-port="10255" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717694 4922 flags.go:64] FLAG: --register-node="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717700 4922 flags.go:64] FLAG: --register-schedulable="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717706 4922 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717717 4922 flags.go:64] FLAG: --registry-burst="10" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717723 4922 flags.go:64] FLAG: --registry-qps="5" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717728 4922 flags.go:64] FLAG: --reserved-cpus="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717734 4922 flags.go:64] FLAG: --reserved-memory="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717742 4922 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717749 4922 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717755 4922 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717761 4922 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717767 4922 flags.go:64] FLAG: --runonce="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717773 4922 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717779 4922 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717785 4922 flags.go:64] FLAG: --seccomp-default="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717791 4922 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717798 4922 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717804 4922 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717811 4922 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717817 4922 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717823 4922 flags.go:64] FLAG: 
--storage-driver-secure="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717829 4922 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717835 4922 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717841 4922 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717848 4922 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717854 4922 flags.go:64] FLAG: --system-cgroups="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717860 4922 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717869 4922 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717875 4922 flags.go:64] FLAG: --tls-cert-file="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717882 4922 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717889 4922 flags.go:64] FLAG: --tls-min-version="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717895 4922 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717901 4922 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717907 4922 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717913 4922 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717919 4922 flags.go:64] FLAG: --v="2" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717927 4922 flags.go:64] FLAG: --version="false" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717935 4922 flags.go:64] FLAG: --vmodule="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717942 4922 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.717949 4922 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718086 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718093 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718099 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718105 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718111 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718117 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718122 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718128 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718135 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 
11:36:38.718141 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718147 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718152 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718157 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718164 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718170 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718175 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718180 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718187 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718193 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718199 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718204 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718211 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718216 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718222 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718227 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718232 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718238 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718244 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718249 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718255 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718260 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718266 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718272 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718278 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718284 4922 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718289 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718295 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718301 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718306 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718312 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718319 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718324 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718330 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718335 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718342 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718349 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718355 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718376 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718382 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718387 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718392 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718397 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718403 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718408 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718413 4922 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718418 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718423 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718430 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718436 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718441 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718446 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718451 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718456 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718462 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718467 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718472 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718477 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718482 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718489 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718496 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.718501 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.718517 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.729837 4922 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.729888 4922 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730002 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730013 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730017 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730023 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
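
The block of "flags.go:64] FLAG: --name=..." entries above records every command-line value this kubelet started with (for example --config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11" and --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"), and the feature_gate.go:386 entry gives the effective gate map once all the warnings have been emitted. As an illustrative sketch only, with regular expressions fitted to the exact message text shown here rather than any documented format, the snippet below lifts both into plain Python structures so the flag dump and the repeated gate dumps from this boot can be inspected or diffed.

import re

FLAG_RE = re.compile(r'FLAG: (--[\w.-]+)="(.*?)"')
GATES_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def kubelet_flags(journal_text: str) -> dict:
    """Collect the FLAG: --name="value" dump into a flag -> value dict."""
    return dict(FLAG_RE.findall(journal_text))

def feature_gates(journal_text: str) -> dict:
    """Parse the Go map rendering 'feature gates: {map[Name:true ...]}' into a name -> bool dict."""
    m = GATES_RE.search(journal_text)
    if not m:
        return {}
    return {name: value == "true"
            for name, _, value in (pair.partition(":") for pair in m.group(1).split())}

sample = ('I0218 11:36:38.717030 4922 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" '
          'I0218 11:36:38.718517 4922 feature_gate.go:386] feature gates: '
          '{map[CloudDualStackNodeIPs:true NodeSwap:false ValidatingAdmissionPolicy:true]}')
print(kubelet_flags(sample))   # {'--config': '/etc/kubernetes/kubelet.conf'}
print(feature_gates(sample))   # {'CloudDualStackNodeIPs': True, 'NodeSwap': False, 'ValidatingAdmissionPolicy': True}
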
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730032 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730038 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730042 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730046 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730050 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730069 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730073 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730077 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730081 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730085 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730088 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730092 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730096 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730099 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730103 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730106 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730111 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730115 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730120 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730123 4922 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730127 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730131 4922 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730148 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730152 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730155 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730159 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730162 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730168 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730172 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730175 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730180 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730184 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730188 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730191 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730195 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730198 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730202 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730206 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730223 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730227 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730230 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730234 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730237 4922 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730241 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730244 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730248 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730251 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730255 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730259 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730264 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730271 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730275 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730279 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730283 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730299 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730303 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730307 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730311 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730314 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730320 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730324 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730328 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730332 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730337 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730342 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730346 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730351 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.730375 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730541 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730551 4922 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730556 4922 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730560 4922 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730564 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730568 4922 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730571 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730575 4922 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730579 4922 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730583 4922 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730587 4922 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730607 4922 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730611 4922 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730615 4922 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730619 4922 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730623 4922 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730627 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730631 4922 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730634 4922 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730639 4922 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730644 4922 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730649 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730653 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730657 4922 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730662 4922 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730666 4922 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730685 4922 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730689 4922 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730693 4922 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730697 4922 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730701 4922 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730704 4922 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730708 4922 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730712 4922 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730716 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730720 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730724 4922 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730728 4922 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730733 4922 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730738 4922 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730742 4922 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730746 4922 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730765 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730769 4922 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730774 4922 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730779 4922 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730784 4922 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730788 4922 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730793 4922 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730797 4922 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730800 4922 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730804 4922 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730809 4922 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730813 4922 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730816 4922 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730820 4922 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730823 4922 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730841 4922 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730845 4922 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730849 4922 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730853 4922 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730857 4922 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730861 4922 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730865 4922 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 
11:36:38.730868 4922 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730872 4922 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730876 4922 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730879 4922 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730883 4922 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730887 4922 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.730891 4922 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.730897 4922 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.732036 4922 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.736384 4922 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.736493 4922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
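
At this point the log moves from flag and feature-gate handling to client certificate bootstrap: rotation is on, the existing kubeconfig is judged still valid, and the cert/key pair is loaded from /var/lib/kubelet/pki/kubelet-client-current.pem; the entries that follow report the certificate's expiration, the rotation deadline chosen for it, and a certificate signing request that fails with "connection refused" against https://api-int.crc.testing:6443. As a hedged, illustrative check only, and assuming the third-party Python cryptography package is available on the node (it is not part of kubelet or this log), the expiry recorded in the log can be compared against the PEM on disk like this:

from datetime import datetime, timezone
from cryptography import x509  # third-party 'cryptography' package, assumed installed

# Path copied from the "Loading cert/key pair" entry above.
CERT_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"

def client_cert_expiry(path: str = CERT_PATH) -> datetime:
    """Return the notAfter timestamp of the certificate stored in the kubelet client cert/key PEM."""
    with open(path, "rb") as f:
        data = f.read()
    # The file holds both the certificate and the private key; extract just the certificate block.
    start = data.index(b"-----BEGIN CERTIFICATE-----")
    end = data.index(b"-----END CERTIFICATE-----", start) + len(b"-----END CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(data[start:end])
    return cert.not_valid_after_utc  # cryptography >= 42; older releases expose the naive not_valid_after

if __name__ == "__main__":
    expiry = client_cert_expiry()
    remaining = expiry - datetime.now(timezone.utc)
    print(f"kubelet client cert expires {expiry:%Y-%m-%d %H:%M} UTC ({remaining.days} days left)")
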
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.737738 4922 server.go:997] "Starting client certificate rotation"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.737761 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.740943 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 23:41:09.339369437 +0000 UTC
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.741440 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.772437 4922 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.775762 4922 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.777148 4922 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.804232 4922 log.go:25] "Validated CRI v1 runtime API"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.836486 4922 log.go:25] "Validated CRI v1 image API"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.840032 4922 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.846479 4922 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-11-31-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.846718 4922 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.877310 4922 manager.go:217] Machine: {Timestamp:2026-02-18 11:36:38.872813473 +0000 UTC m=+0.600517603 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e6651d54-876d-4ba5-b7d2-813fd96498e1 BootID:142c3684-8991-4ed8-97d2-827c32777413 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24
Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f9:29:ed Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f9:29:ed Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d1:04:45 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:73:34:14 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ad:a9:b1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7e:e5:77 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:72:06:31:f0:8b:f8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:ae:5e:bb:c9:56 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.877992 4922 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.878276 4922 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.881462 4922 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.881727 4922 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.881784 4922 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.882102 4922 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.882115 4922 
container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.882642 4922 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.882680 4922 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.882938 4922 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.883036 4922 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.886576 4922 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.886601 4922 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.886634 4922 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.886652 4922 kubelet.go:324] "Adding apiserver pod source"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.886676 4922 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.894137 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.894650 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.894800 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.894987 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.896263 4922 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.897796 4922 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.899315 4922 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901191 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901224 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901234 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901245 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901301 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901314 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901324 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901340 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901353 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901389 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901414 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.901427 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.902771 4922 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.903510 4922 server.go:1280] "Started kubelet"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.905059 4922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.905217 4922 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.905543 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.906458 4922 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 18 11:36:38 crc systemd[1]: Started Kubernetes Kubelet.
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.906560 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.906620 4922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.907013 4922 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.907037 4922 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.907192 4922 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.907125 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:02:06.4506587 +0000 UTC
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.907535 4922 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.908121 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms"
Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.908140 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.908489 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.909390 4922 factory.go:55] Registering systemd factory
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.909424 4922 factory.go:221] Registration of the systemd container factory successfully
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.909923 4922 factory.go:153] Registering CRI-O factory
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.909969 4922 factory.go:221] Registration of the crio container factory successfully
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.910089 4922 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.910124 4922 factory.go:103] Registering Raw factory
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.910150 4922 manager.go:1196] Started watching for new ooms in manager
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.910828 4922 server.go:460] "Adding debug handlers to kubelet server"
Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.911064 4922 manager.go:319] Starting recovery of all containers
Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.910273 4922 event.go:368] "Unable to write event
(may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189554313d95072c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:36:38.903465772 +0000 UTC m=+0.631169872,LastTimestamp:2026-02-18 11:36:38.903465772 +0000 UTC m=+0.631169872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.924802 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.924946 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.924974 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.924996 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925019 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925042 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925101 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925121 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925145 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925169 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925195 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925222 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925244 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925317 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925345 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925415 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925484 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925646 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925671 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925693 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925868 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925904 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925936 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925965 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.925988 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926014 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926045 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926070 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926101 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926123 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926149 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926172 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926196 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926218 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926246 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926268 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926292 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926312 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926336 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926356 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926410 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926433 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926455 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926479 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926536 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926560 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926610 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926642 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926666 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926689 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926715 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926740 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926773 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926798 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926863 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926894 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926970 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.926995 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927017 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927041 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927065 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927091 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927124 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927150 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927174 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927195 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927218 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927239 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927264 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927288 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927310 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927333 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927356 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927409 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927438 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927459 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927481 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927502 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927528 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927550 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927572 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927596 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927618 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927638 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927661 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927682 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927705 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927725 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927756 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927784 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927812 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927834 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927855 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927877 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927901 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927922 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927947 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927968 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.927991 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928015 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928038 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928059 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928081 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928104 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928135 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928166 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928203 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928226 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928249 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928276 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928303 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928325 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928347 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928396 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928421 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928457 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928503 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928533 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928556 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928577 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928599 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928619 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928640 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928662 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928689 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928710 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928733 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928754 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928774 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928795 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928828 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928851 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928872 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928894 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928915 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928975 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.928995 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929018 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929046 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929069 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929090 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929111 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929134 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929157 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929178 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929209 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929231 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929253 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929311 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929353 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929398 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929423 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929447 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929477 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929509 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929534 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929554 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929575 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929602 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929623 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929644 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929666 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929691 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929713 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929735 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929758 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929781 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929802 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929824 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929846 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929869 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929890 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929911 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929932 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929955 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.929978 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930011 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930033 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930054 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930074 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930098 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930119 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930145 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930173 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930203 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930232 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930255 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930277 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930300 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930321 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930345 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930466 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930489 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930514 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930545 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.930572 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932819 4922 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932873 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932899 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932924 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932958 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.932983 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933007 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933032 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933065 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933088 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933113 4922 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933135 4922 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933155 4922 reconstruct.go:97] "Volume reconstruction finished" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.933170 4922 reconciler.go:26] "Reconciler: start to sync state" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.939475 4922 manager.go:324] Recovery completed Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.953877 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.956892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.956980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.956998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.957906 4922 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.958014 4922 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.958109 4922 state_mem.go:36] "Initialized new in-memory state store" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.969061 4922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.971674 4922 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.971736 4922 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.971781 4922 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.971979 4922 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 11:36:38 crc kubenswrapper[4922]: W0218 11:36:38.973721 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:38 crc kubenswrapper[4922]: E0218 11:36:38.973853 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.978559 4922 policy_none.go:49] "None policy: Start" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.979696 4922 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 11:36:38 crc kubenswrapper[4922]: I0218 11:36:38.979758 4922 state_mem.go:35] "Initializing new in-memory state store" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.007698 4922 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.038610 4922 manager.go:334] "Starting Device Plugin manager" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.038675 4922 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.038689 4922 server.go:79] "Starting device plugin registration server" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.039174 4922 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.039191 4922 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.040456 4922 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.040607 4922 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.040618 4922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.052548 4922 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.072830 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 11:36:39 crc kubenswrapper[4922]: 
I0218 11:36:39.072986 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.074689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.074726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.074740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.074902 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.075530 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.075573 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.075948 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076037 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076308 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076313 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076413 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076425 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.076463 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.080073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.080136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.080160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.080728 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.080950 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.081290 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082588 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082839 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.082967 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083015 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083842 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.083988 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.084324 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.084382 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.084541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.084586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.084596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.085331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.085375 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.085386 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.108977 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.136914 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.136980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137022 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137051 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137080 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc 
kubenswrapper[4922]: I0218 11:36:39.137112 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137138 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137166 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137194 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137348 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137406 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.137522 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.140112 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.141229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.141308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.141320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.141352 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.141928 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239232 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239303 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239332 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239386 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239409 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239429 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239449 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239493 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239511 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239501 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239559 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239587 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") 
pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239596 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239653 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239589 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239713 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239516 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239663 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239843 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239653 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239878 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239956 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239918 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239693 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.239661 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.342744 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.345076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.345135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.345149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.345180 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.345870 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.405697 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.422309 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.429880 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.448261 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.452845 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:39 crc kubenswrapper[4922]: W0218 11:36:39.470388 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f5e7e409d54c4053e9bc58aeb23a7e94fdfd0d7046551b1725bc72f59ce156f0 WatchSource:0}: Error finding container f5e7e409d54c4053e9bc58aeb23a7e94fdfd0d7046551b1725bc72f59ce156f0: Status 404 returned error can't find the container with id f5e7e409d54c4053e9bc58aeb23a7e94fdfd0d7046551b1725bc72f59ce156f0 Feb 18 11:36:39 crc kubenswrapper[4922]: W0218 11:36:39.471858 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-706d923d21b2e0e838fc5486b7d5734806bfe3a7ff5fc9c8e2da7d66f81c4fe1 WatchSource:0}: Error finding container 706d923d21b2e0e838fc5486b7d5734806bfe3a7ff5fc9c8e2da7d66f81c4fe1: Status 404 returned error can't find the container with id 706d923d21b2e0e838fc5486b7d5734806bfe3a7ff5fc9c8e2da7d66f81c4fe1 Feb 18 11:36:39 crc kubenswrapper[4922]: W0218 11:36:39.478691 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6ba071ae1817cd19cd72801323844f239d5506cbbf9675c46b4ccec3727620b1 WatchSource:0}: Error finding container 6ba071ae1817cd19cd72801323844f239d5506cbbf9675c46b4ccec3727620b1: Status 404 returned error can't find the container with id 6ba071ae1817cd19cd72801323844f239d5506cbbf9675c46b4ccec3727620b1 Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.515089 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.746280 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.748061 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.748115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.748128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.748164 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.748771 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.907288 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.907718 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:33:15.922199955 +0000 UTC Feb 18 11:36:39 crc kubenswrapper[4922]: W0218 11:36:39.951875 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:39 crc kubenswrapper[4922]: E0218 11:36:39.952001 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.977879 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c223fd174ef908535abcb97089b26d3651c36a9a3ccd3d4651a76c0cd07b56e0"} Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.978863 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"502e579df3404cb95fd7a54a08b5314f02f3daaaf290711e5bbe0cb08c999b97"} Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.979910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ba071ae1817cd19cd72801323844f239d5506cbbf9675c46b4ccec3727620b1"} Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.981033 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"706d923d21b2e0e838fc5486b7d5734806bfe3a7ff5fc9c8e2da7d66f81c4fe1"} Feb 18 11:36:39 crc kubenswrapper[4922]: I0218 11:36:39.982048 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f5e7e409d54c4053e9bc58aeb23a7e94fdfd0d7046551b1725bc72f59ce156f0"} Feb 18 11:36:40 crc kubenswrapper[4922]: W0218 11:36:40.060246 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.060353 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.317062 4922 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 18 11:36:40 crc kubenswrapper[4922]: W0218 11:36:40.379513 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.379634 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:40 crc kubenswrapper[4922]: W0218 11:36:40.485405 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.485550 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.549282 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.550900 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.550941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.550952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.550980 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.551498 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.818447 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 11:36:40 crc kubenswrapper[4922]: E0218 11:36:40.820454 4922 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.906915 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.907874 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:20:05.799849586 +0000 UTC Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.987383 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.987435 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.987446 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.987459 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.987531 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989224 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e" exitCode=0 Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989405 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.989841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.990802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.990892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 
11:36:40.990920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.991494 4922 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d" exitCode=0 Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.991549 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.991574 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.992814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.994283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.994420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.997281 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999288 4922 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="32c42a81cb689c2599fc99a29686dcfa4beb5434da7149bbe8ca19d545a579bb" exitCode=0 Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"32c42a81cb689c2599fc99a29686dcfa4beb5434da7149bbe8ca19d545a579bb"} Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999461 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:40 crc kubenswrapper[4922]: I0218 11:36:40.999571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.001101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.001378 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.001479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.002586 4922 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727" exitCode=0 Feb 18 
11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.002661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727"} Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.002752 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.003688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.003727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.003744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.907300 4922 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 18 11:36:41 crc kubenswrapper[4922]: I0218 11:36:41.908336 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:20:39.618857509 +0000 UTC Feb 18 11:36:41 crc kubenswrapper[4922]: E0218 11:36:41.919524 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.006902 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.006985 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.006988 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.006999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.008329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.008387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.008398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc 
kubenswrapper[4922]: I0218 11:36:42.013697 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.013751 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.013765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.013776 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.013778 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.013786 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.014509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.014540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.014549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.016573 4922 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd" exitCode=0 Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.016695 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.016719 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.017481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.017529 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.017543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.018630 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bc50b682470a4987df420a64c6ec491b7137229551303ea861e8c4c037cba371"} Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.018675 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.018690 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019604 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.019582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.152508 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.153921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.153969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.153983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.154014 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:42 crc kubenswrapper[4922]: E0218 11:36:42.154539 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.355073 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.772705 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:42 crc kubenswrapper[4922]: I0218 11:36:42.908829 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:44:15.018260875 +0000 UTC Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024122 4922 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f" exitCode=0 Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024245 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024262 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024266 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f"} Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024301 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024394 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.024250 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025023 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025048 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025653 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.025753 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026558 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.026589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:43 crc kubenswrapper[4922]: I0218 11:36:43.909970 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:26:11.528031865 +0000 UTC Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030643 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2"} Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030752 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913"} Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030779 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24"} Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030684 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030802 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c"} Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.030854 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.031751 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.031805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.031818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:44 crc kubenswrapper[4922]: I0218 11:36:44.910856 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:43:56.907522797 +0000 UTC Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.041984 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8"} Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.042159 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.043459 4922 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.043501 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.043515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.110185 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.110551 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.112069 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.112123 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.112137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.186947 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.355254 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.357327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.357432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.357446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.357487 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:36:45 crc kubenswrapper[4922]: I0218 11:36:45.911197 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:22:04.295861442 +0000 UTC Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.044724 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.046355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.046446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.046461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.151462 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.151782 4922 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.153609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.153703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.153733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.196712 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.197037 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.197115 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.199408 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.199487 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.199515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.911957 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:11:58.842981948 +0000 UTC Feb 18 11:36:46 crc kubenswrapper[4922]: I0218 11:36:46.960643 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:47 crc kubenswrapper[4922]: I0218 11:36:47.048184 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:47 crc kubenswrapper[4922]: I0218 11:36:47.049464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:47 crc kubenswrapper[4922]: I0218 11:36:47.049522 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:47 crc kubenswrapper[4922]: I0218 11:36:47.049536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:47 crc kubenswrapper[4922]: I0218 11:36:47.913093 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:53:32.188571626 +0000 UTC Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.110461 4922 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.110586 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.533226 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.533536 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.535084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.535131 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.535147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.914208 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:24:13.458931235 +0000 UTC Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.964646 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.964857 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.966122 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.966323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:48 crc kubenswrapper[4922]: I0218 11:36:48.966504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.014102 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.031699 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:49 crc kubenswrapper[4922]: E0218 11:36:49.052708 4922 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.053004 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.054298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.054343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.054386 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.201536 4922 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.201816 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.203648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.203708 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.203732 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:49 crc kubenswrapper[4922]: I0218 11:36:49.915942 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:19:53.538649041 +0000 UTC Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.055166 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.056631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.056757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.056782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.060973 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:36:50 crc kubenswrapper[4922]: I0218 11:36:50.916916 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:14:40.414741224 +0000 UTC Feb 18 11:36:51 crc kubenswrapper[4922]: I0218 11:36:51.058913 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:51 crc kubenswrapper[4922]: I0218 11:36:51.059982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:51 crc kubenswrapper[4922]: I0218 11:36:51.060027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:51 crc kubenswrapper[4922]: I0218 11:36:51.060040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:51 crc kubenswrapper[4922]: I0218 11:36:51.917526 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:48:24.632344913 +0000 UTC Feb 18 11:36:52 crc kubenswrapper[4922]: W0218 11:36:52.684338 4922 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.684489 4922 trace.go:236] Trace[1237687644]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:36:42.682) 
(total time: 10002ms): Feb 18 11:36:52 crc kubenswrapper[4922]: Trace[1237687644]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:36:52.684) Feb 18 11:36:52 crc kubenswrapper[4922]: Trace[1237687644]: [10.00211735s] [10.00211735s] END Feb 18 11:36:52 crc kubenswrapper[4922]: E0218 11:36:52.684548 4922 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.721371 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.721462 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.731790 4922 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.731884 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 11:36:52 crc kubenswrapper[4922]: I0218 11:36:52.918661 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:54:03.92023841 +0000 UTC Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.067079 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.068866 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1" exitCode=255 Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.068928 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1"} Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.069123 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:53 crc 
kubenswrapper[4922]: I0218 11:36:53.070059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.070118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.070132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.070961 4922 scope.go:117] "RemoveContainer" containerID="3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1" Feb 18 11:36:53 crc kubenswrapper[4922]: I0218 11:36:53.919336 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:10:14.334198729 +0000 UTC Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.081317 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.084451 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd"} Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.084677 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.085831 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.085888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.085907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:54 crc kubenswrapper[4922]: I0218 11:36:54.919972 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:25:10.789566894 +0000 UTC Feb 18 11:36:55 crc kubenswrapper[4922]: I0218 11:36:55.920134 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 05:42:55.532447123 +0000 UTC Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.203809 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.204081 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.204273 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.205694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.205750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:56 crc 
kubenswrapper[4922]: I0218 11:36:56.205765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.210344 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:36:56 crc kubenswrapper[4922]: I0218 11:36:56.920308 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:48:43.43199961 +0000 UTC Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.092604 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.094472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.094575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.094622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:57 crc kubenswrapper[4922]: E0218 11:36:57.733529 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.736935 4922 trace.go:236] Trace[1637881157]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:36:42.826) (total time: 14910ms): Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1637881157]: ---"Objects listed" error: 14910ms (11:36:57.736) Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1637881157]: [14.910520267s] [14.910520267s] END Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.736994 4922 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.737858 4922 trace.go:236] Trace[1971315235]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:36:43.651) (total time: 14086ms): Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1971315235]: ---"Objects listed" error: 14086ms (11:36:57.737) Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1971315235]: [14.086289522s] [14.086289522s] END Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.738139 4922 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.740988 4922 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 11:36:57 crc kubenswrapper[4922]: E0218 11:36:57.742152 4922 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.742251 4922 trace.go:236] Trace[1527444132]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:36:42.982) (total time: 14759ms): Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1527444132]: ---"Objects listed" error: 14759ms (11:36:57.742) Feb 18 11:36:57 crc kubenswrapper[4922]: Trace[1527444132]: 
[14.759439268s] [14.759439268s] END Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.742724 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.747684 4922 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 11:36:57 crc kubenswrapper[4922]: I0218 11:36:57.920623 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:02:03.331172927 +0000 UTC Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.094998 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.096453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.096611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.096721 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.111680 4922 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.111860 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.444271 4922 csr.go:261] certificate signing request csr-k6swn is approved, waiting to be issued Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.469839 4922 csr.go:257] certificate signing request csr-k6swn is issued Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.738110 4922 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 11:36:58 crc kubenswrapper[4922]: W0218 11:36:58.738598 4922 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.921566 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:33:42.871773631 +0000 UTC Feb 18 11:36:58 crc kubenswrapper[4922]: I0218 11:36:58.991866 4922 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.018243 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 11:36:59 crc 
kubenswrapper[4922]: I0218 11:36:59.032405 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.100763 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.101579 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.103658 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" exitCode=255 Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.103748 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd"} Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.103843 4922 scope.go:117] "RemoveContainer" containerID="3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.266521 4922 scope.go:117] "RemoveContainer" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" Feb 18 11:36:59 crc kubenswrapper[4922]: E0218 11:36:59.267200 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.471436 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 11:31:58 +0000 UTC, rotation deadline is 2026-12-06 07:05:05.524038344 +0000 UTC Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.471508 4922 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6979h28m6.052533973s for next certificate rotation Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.900714 4922 apiserver.go:52] "Watching apiserver" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.909120 4922 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.909649 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-crc","openshift-multus/multus-c9xzd","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-w46bt","openshift-multus/multus-additional-cni-plugins-26zbd","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-image-registry/node-ca-q5qkb","openshift-machine-config-operator/machine-config-daemon-znglx","openshift-ovn-kubernetes/ovnkube-node-wg4r5"] Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.910072 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.910117 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.910096 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:36:59 crc kubenswrapper[4922]: E0218 11:36:59.910208 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:36:59 crc kubenswrapper[4922]: E0218 11:36:59.910405 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.910439 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.910615 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.912043 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.912146 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w46bt" Feb 18 11:36:59 crc kubenswrapper[4922]: E0218 11:36:59.912171 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.912712 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.912824 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c9xzd" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.913635 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.914838 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.914947 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.919885 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920335 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920398 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920575 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920799 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920839 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.920956 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921052 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921058 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921090 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921185 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921188 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921390 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921468 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921525 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921585 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921598 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921756 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.921815 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.922445 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:21:49.475083279 +0000 UTC Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.923079 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.923513 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.923924 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924274 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924410 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924698 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924756 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924700 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.924957 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925001 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925108 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925145 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925201 4922 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925545 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925705 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.925714 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.936033 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.947082 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.964826 4922 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fcb9209bee7db64487a386ac170c5349cab80d8924d502f2325cab2d19761b1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:52Z\\\",\\\"message\\\":\\\"W0218 11:36:42.053001 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 11:36:42.053399 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771414602 cert, and key in /tmp/serving-cert-1995526895/serving-signer.crt, /tmp/serving-cert-1995526895/serving-signer.key\\\\nI0218 11:36:42.305115 1 observer_polling.go:159] Starting file observer\\\\nW0218 11:36:42.309353 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 11:36:42.309585 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:42.311169 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1995526895/tls.crt::/tmp/serving-cert-1995526895/tls.key\\\\\\\"\\\\nF0218 11:36:52.736229 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 
11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.979220 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:36:59 crc kubenswrapper[4922]: I0218 11:36:59.992713 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.004152 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.008604 4922 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.014975 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.022991 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.032489 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.048511 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.054426 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.054712 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.054806 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.054893 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: 
I0218 11:37:00.054973 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055227 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055316 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055409 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055493 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055599 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055695 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055779 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 
11:37:00.055870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.055957 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056315 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057410 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056556 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056609 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056590 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056634 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056734 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.056838 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057068 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057197 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057487 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057378 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057388 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.057926 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058025 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058084 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058108 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058128 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058143 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058524 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058578 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058599 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.058730 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059046 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059212 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059277 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059319 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059397 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059425 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059451 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059490 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059523 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059552 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059578 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059608 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059637 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059639 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059666 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059694 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059722 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059748 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059823 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059852 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059858 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059880 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059905 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059927 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059950 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.059951 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060043 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060062 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060074 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060104 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060140 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060189 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060244 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060268 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060286 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060277 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060298 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060311 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060440 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060502 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060491 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060562 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060604 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060608 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060631 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060690 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060709 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060733 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060774 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060850 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060878 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060912 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060967 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061000 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061064 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061314 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061379 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061441 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061464 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061483 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061520 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061559 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061599 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061637 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061695 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061730 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061772 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061827 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061849 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.060966 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061092 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061344 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061373 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061429 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061451 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061456 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.061919 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062268 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062552 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062512 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062710 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062737 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062951 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.062981 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063040 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063062 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063082 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063113 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063123 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063127 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063145 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063170 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063190 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063246 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063265 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063313 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063332 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063350 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063402 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063578 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063616 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063696 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063722 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063781 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063800 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064163 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064220 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 
11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064250 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064294 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064314 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064455 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064475 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064516 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064550 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064603 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064927 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.064960 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065111 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065134 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065172 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065302 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065350 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065421 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065441 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065461 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065480 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065500 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065542 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065587 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065607 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065674 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065690 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065707 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065724 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065767 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065785 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065829 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065855 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066457 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066756 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066802 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066872 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067017 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067048 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067071 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067096 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067132 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067188 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067219 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067249 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067273 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067296 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067318 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063579 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.063619 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065281 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.067378 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:00.567333471 +0000 UTC m=+22.295037561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065312 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065412 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065424 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065432 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065435 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065583 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065764 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065837 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.065847 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066048 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066225 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066189 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066326 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066372 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066744 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.066762 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067150 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067313 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067684 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.067914 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.068620 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069002 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069131 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069267 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069305 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069350 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069558 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.069935 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.070217 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.070557 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.071134 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.072000 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.072380 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.072822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.072984 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.073184 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.073689 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.073836 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.073983 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.074191 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075007 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075093 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075186 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075239 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075398 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075611 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075671 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075675 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.075716 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076074 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076200 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076240 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076272 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076276 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076301 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076333 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076376 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076404 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076432 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076481 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076505 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076531 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076554 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076579 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076606 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076631 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076660 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076687 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076712 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076738 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076765 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076790 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076815 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076836 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076899 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076925 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.076979 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077008 
4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077038 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077065 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077097 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077126 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077151 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077188 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:37:00 crc 
kubenswrapper[4922]: I0218 11:37:00.077586 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.077627 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/067f44ac-9e60-4581-87cc-f2e1c823fc4c-hosts-file\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078061 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078085 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-system-cni-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078107 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078129 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078150 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-serviceca\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078174 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdb7cedc-b2e3-48f0-80e0-e17073b43228-proxy-tls\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078197 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-multus\") pod 
\"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078220 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45cw\" (UniqueName: \"kubernetes.io/projected/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-kube-api-access-c45cw\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078241 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-socket-dir-parent\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078299 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078351 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078389 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078417 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078458 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-cnibin\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078482 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-cnibin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078506 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-k8s-cni-cncf-io\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078529 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078551 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fdb7cedc-b2e3-48f0-80e0-e17073b43228-rootfs\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078576 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fdb7cedc-b2e3-48f0-80e0-e17073b43228-mcd-auth-proxy-config\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078599 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-system-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078618 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078638 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-host\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078714 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078742 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079049 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079077 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-cni-binary-copy\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079114 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079145 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079171 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079194 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-binary-copy\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079212 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-conf-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079234 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079253 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26p2s\" (UniqueName: \"kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079274 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-os-release\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079294 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zqx\" (UniqueName: \"kubernetes.io/projected/592c6351-c252-4c19-b3b1-167096be2de9-kube-api-access-55zqx\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079313 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-hostroot\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079336 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-daemon-config\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079397 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079418 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-netns\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079442 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-multus-certs\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079462 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ks8d\" (UniqueName: \"kubernetes.io/projected/067f44ac-9e60-4581-87cc-f2e1c823fc4c-kube-api-access-5ks8d\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079480 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079502 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079519 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079536 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079554 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-os-release\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079593 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079611 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079630 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079670 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-kubelet\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079690 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6rh\" (UniqueName: \"kubernetes.io/projected/9b4595ac-c521-4ada-950d-e1b01cdff99b-kube-api-access-zv6rh\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079708 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079727 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079745 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-bin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079763 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079780 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgzp\" (UniqueName: \"kubernetes.io/projected/fdb7cedc-b2e3-48f0-80e0-e17073b43228-kube-api-access-mqgzp\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079803 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079821 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-etc-kubernetes\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079932 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079944 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079957 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079974 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079985 4922 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.079997 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080008 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080020 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080030 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080040 4922 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080057 4922 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080071 4922 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080081 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080090 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080104 4922 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080113 4922 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080123 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" 
DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080133 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080164 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080177 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080192 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080214 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080226 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080237 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080246 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080257 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080266 4922 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080277 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080288 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080298 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 
11:37:00.080314 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080326 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080340 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080351 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080390 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080408 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080424 4922 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080437 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080449 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080459 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080469 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080481 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080493 4922 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080512 
4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080527 4922 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080540 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080551 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080562 4922 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080572 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080588 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080598 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080608 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080619 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080628 4922 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080638 4922 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080648 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080657 4922 
reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080667 4922 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080677 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080691 4922 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080700 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080710 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080720 4922 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080732 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080741 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080753 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080766 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080781 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080793 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080805 4922 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080817 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080830 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080843 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080855 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080870 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080882 4922 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080897 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080909 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080930 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080945 4922 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080958 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080971 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.080984 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081004 4922 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081021 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081034 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081046 4922 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081059 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081071 4922 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081085 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081107 4922 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081120 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081133 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081145 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081156 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081168 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081181 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081193 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081206 4922 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081222 4922 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081235 4922 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081248 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.078480 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.081772 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.082100 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.082322 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.082520 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083177 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083265 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083536 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083546 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.097716 4922 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083747 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083893 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083947 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.083968 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.082562 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.087717 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.088409 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.097974 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.089287 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.089946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.098100 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.090688 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.091221 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.092048 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.092569 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.092685 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.093310 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.094015 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.094576 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.097091 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.097340 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098248 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098267 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.097556 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098389 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.098399 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098384 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:00.598284238 +0000 UTC m=+22.325988538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.097763 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098512 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:00.598476803 +0000 UTC m=+22.326180883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.098512 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.097795 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.097965 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.088995 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098420 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.098583 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.097767 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098630 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:00.598597106 +0000 UTC m=+22.326301186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.098675 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:00.598656097 +0000 UTC m=+22.326360177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.099023 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.099100 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.099116 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.099194 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.100133 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.100630 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.100671 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.101039 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.102167 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.102226 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.102879 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.103853 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.105153 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.105222 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.106146 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.106573 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.106752 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.108659 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.109729 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.114322 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.114668 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.115909 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.116392 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.116502 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.116817 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117017 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117025 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117490 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117689 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117789 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.117898 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.118625 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119234 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119266 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119342 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119602 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119698 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.119990 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.120169 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.120292 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.120295 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.120412 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.121599 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.121401 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.122733 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.122934 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.123516 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",
\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.123990 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.124575 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.125171 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.125766 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126305 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126478 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126480 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126513 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.126945 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.127117 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.127826 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.128125 4922 scope.go:117] "RemoveContainer" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.128288 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.128703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.128892 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.136660 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.137288 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.137879 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.138827 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.140093 4922 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.145873 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.153256 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.157667 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.171900 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182338 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182384 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182404 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182421 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-cnibin\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182438 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-cnibin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182454 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-socket-dir-parent\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182470 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-k8s-cni-cncf-io\") 
pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182487 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182505 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fdb7cedc-b2e3-48f0-80e0-e17073b43228-rootfs\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182521 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fdb7cedc-b2e3-48f0-80e0-e17073b43228-mcd-auth-proxy-config\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-system-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182552 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182575 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182596 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-cni-binary-copy\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182630 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-host\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 
crc kubenswrapper[4922]: I0218 11:37:00.182648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182662 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182677 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-binary-copy\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-conf-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182707 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26p2s\" (UniqueName: \"kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182721 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-os-release\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182736 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zqx\" (UniqueName: \"kubernetes.io/projected/592c6351-c252-4c19-b3b1-167096be2de9-kube-api-access-55zqx\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-hostroot\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182768 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-daemon-config\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182783 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-netns\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-multus-certs\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182815 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ks8d\" (UniqueName: \"kubernetes.io/projected/067f44ac-9e60-4581-87cc-f2e1c823fc4c-kube-api-access-5ks8d\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182830 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182844 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182860 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182877 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182893 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182908 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-os-release\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182923 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182937 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-kubelet\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.182991 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6rh\" (UniqueName: \"kubernetes.io/projected/9b4595ac-c521-4ada-950d-e1b01cdff99b-kube-api-access-zv6rh\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183007 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183024 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183039 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-bin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183055 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183085 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqgzp\" (UniqueName: \"kubernetes.io/projected/fdb7cedc-b2e3-48f0-80e0-e17073b43228-kube-api-access-mqgzp\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-etc-kubernetes\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183125 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/067f44ac-9e60-4581-87cc-f2e1c823fc4c-hosts-file\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183141 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183157 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-system-cni-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183193 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-serviceca\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183222 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdb7cedc-b2e3-48f0-80e0-e17073b43228-proxy-tls\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-multus\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183252 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45cw\" (UniqueName: 
\"kubernetes.io/projected/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-kube-api-access-c45cw\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183269 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183295 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183336 4922 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183349 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183382 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183391 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183401 4922 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183411 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183421 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183431 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183443 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183452 4922 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183462 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183481 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183491 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183501 4922 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183511 4922 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183522 4922 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183532 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183542 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183553 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183563 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183573 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183583 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 
11:37:00.183594 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183604 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183615 4922 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183625 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183635 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183645 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183655 4922 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183665 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183675 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183684 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183694 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183704 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183714 4922 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183725 4922 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183735 4922 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183745 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183756 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183766 4922 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183777 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183787 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183796 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183805 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183814 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183824 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183855 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183866 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: 
I0218 11:37:00.183875 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183883 4922 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183892 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183901 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183911 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183919 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183928 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183937 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183945 4922 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183953 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183961 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183970 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183980 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc 
kubenswrapper[4922]: I0218 11:37:00.183988 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.183997 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184006 4922 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184015 4922 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184023 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184031 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184041 4922 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184050 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184059 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184068 4922 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184077 4922 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184085 4922 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184094 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184103 4922 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184111 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184120 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184128 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184137 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184145 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184153 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184161 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184170 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184178 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184187 4922 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184195 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184203 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184213 4922 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184223 4922 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184232 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184240 4922 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184248 4922 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184402 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-multus-certs\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184442 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184466 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184816 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184789 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184887 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184945 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184965 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185020 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-kubelet\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185036 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib\") pod \"ovnkube-node-wg4r5\" (UID: 
\"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185096 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-host\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185101 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185171 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.184836 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-bin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185215 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-hostroot\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185217 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-os-release\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185277 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fdb7cedc-b2e3-48f0-80e0-e17073b43228-rootfs\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185333 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-cnibin\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185417 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-cnibin\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185472 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-socket-dir-parent\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185513 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-k8s-cni-cncf-io\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185549 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185587 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185573 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-os-release\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185625 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185756 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185795 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-conf-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185816 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " 
pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-system-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185886 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185941 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.185854 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-run-netns\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186003 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-cni-dir\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186063 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-host-var-lib-cni-multus\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186070 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186086 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-binary-copy\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186106 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186096 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b4595ac-c521-4ada-950d-e1b01cdff99b-etc-kubernetes\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186070 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/067f44ac-9e60-4581-87cc-f2e1c823fc4c-hosts-file\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186309 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186345 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/592c6351-c252-4c19-b3b1-167096be2de9-system-cni-dir\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.186430 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-multus-daemon-config\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.187008 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.187305 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-serviceca\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.187938 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fdb7cedc-b2e3-48f0-80e0-e17073b43228-mcd-auth-proxy-config\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.188055 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/592c6351-c252-4c19-b3b1-167096be2de9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.188198 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9b4595ac-c521-4ada-950d-e1b01cdff99b-cni-binary-copy\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.188960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.192098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fdb7cedc-b2e3-48f0-80e0-e17073b43228-proxy-tls\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.208001 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.211320 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zqx\" (UniqueName: \"kubernetes.io/projected/592c6351-c252-4c19-b3b1-167096be2de9-kube-api-access-55zqx\") pod \"multus-additional-cni-plugins-26zbd\" (UID: \"592c6351-c252-4c19-b3b1-167096be2de9\") " pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.212019 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgzp\" (UniqueName: \"kubernetes.io/projected/fdb7cedc-b2e3-48f0-80e0-e17073b43228-kube-api-access-mqgzp\") pod \"machine-config-daemon-znglx\" (UID: \"fdb7cedc-b2e3-48f0-80e0-e17073b43228\") " pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.212110 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45cw\" (UniqueName: \"kubernetes.io/projected/3cd3723d-a12f-4c7c-a1ea-63bfef3c931a-kube-api-access-c45cw\") pod \"node-ca-q5qkb\" (UID: \"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\") " pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.212741 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26p2s\" (UniqueName: \"kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s\") pod \"ovnkube-node-wg4r5\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.212771 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6rh\" (UniqueName: \"kubernetes.io/projected/9b4595ac-c521-4ada-950d-e1b01cdff99b-kube-api-access-zv6rh\") pod \"multus-c9xzd\" (UID: \"9b4595ac-c521-4ada-950d-e1b01cdff99b\") " pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.214210 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ks8d\" (UniqueName: \"kubernetes.io/projected/067f44ac-9e60-4581-87cc-f2e1c823fc4c-kube-api-access-5ks8d\") pod \"node-resolver-w46bt\" (UID: \"067f44ac-9e60-4581-87cc-f2e1c823fc4c\") " pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.219617 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.228635 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.234140 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.241348 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.248330 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w46bt" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.248590 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-91a352607919a46845c4d118bbbb9255f0c9a7fa067c63dc230d7e24435313c3 WatchSource:0}: Error finding container 91a352607919a46845c4d118bbbb9255f0c9a7fa067c63dc230d7e24435313c3: Status 404 returned error can't find the container with id 91a352607919a46845c4d118bbbb9255f0c9a7fa067c63dc230d7e24435313c3 Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.251493 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.258200 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-26zbd" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.258831 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f9c954d8a26388280d54536ac050bf4b9fc46a86a540e8ea8272bbb404b2be12 WatchSource:0}: Error finding container f9c954d8a26388280d54536ac050bf4b9fc46a86a540e8ea8272bbb404b2be12: Status 404 returned error can't find the container with id f9c954d8a26388280d54536ac050bf4b9fc46a86a540e8ea8272bbb404b2be12 Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.263478 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c9xzd" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.263890 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067f44ac_9e60_4581_87cc_f2e1c823fc4c.slice/crio-5b7c4d01264221ec01062dd316e3f65f1514dda9ec238d45b751f38f58ca7e76 WatchSource:0}: Error finding container 5b7c4d01264221ec01062dd316e3f65f1514dda9ec238d45b751f38f58ca7e76: Status 404 returned error can't find the container with id 5b7c4d01264221ec01062dd316e3f65f1514dda9ec238d45b751f38f58ca7e76 Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.272011 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.273775 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb2
01946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.279994 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q5qkb" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.282970 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.286979 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.293970 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.295017 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.301069 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b4595ac_c521_4ada_950d_e1b01cdff99b.slice/crio-369d3ce9933d0ba2aa5337839d9116d19adb4c543b8bd377f6a5a4cf7dbcd90d WatchSource:0}: Error finding container 369d3ce9933d0ba2aa5337839d9116d19adb4c543b8bd377f6a5a4cf7dbcd90d: Status 404 returned error can't find the container with id 369d3ce9933d0ba2aa5337839d9116d19adb4c543b8bd377f6a5a4cf7dbcd90d Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.309545 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.314805 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-54336f292dc20f541fbc275cfb15270551b4b10cabac510fcffaa8364cc0a3cc WatchSource:0}: Error finding container 54336f292dc20f541fbc275cfb15270551b4b10cabac510fcffaa8364cc0a3cc: Status 404 returned error can't find the container with id 54336f292dc20f541fbc275cfb15270551b4b10cabac510fcffaa8364cc0a3cc Feb 18 
11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.324406 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.339961 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: W0218 11:37:00.349629 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd3723d_a12f_4c7c_a1ea_63bfef3c931a.slice/crio-fea43d3481437797091f71baf35e55a9212d9c3819a4ee4d5a880815d5152258 WatchSource:0}: Error finding container fea43d3481437797091f71baf35e55a9212d9c3819a4ee4d5a880815d5152258: Status 404 returned error can't find the container with id fea43d3481437797091f71baf35e55a9212d9c3819a4ee4d5a880815d5152258 Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.360510 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.382109 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.398716 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:00 crc 
kubenswrapper[4922]: W0218 11:37:00.432377 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod653a41bb_bb1d_421c_a92b_7f2811d95edf.slice/crio-925e7e67a4fbe78eeed080fb6248ee3ea896e9f6ac32d11416ce15ab5e43d0fb WatchSource:0}: Error finding container 925e7e67a4fbe78eeed080fb6248ee3ea896e9f6ac32d11416ce15ab5e43d0fb: Status 404 returned error can't find the container with id 925e7e67a4fbe78eeed080fb6248ee3ea896e9f6ac32d11416ce15ab5e43d0fb Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.587963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.588163 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:01.588146578 +0000 UTC m=+23.315850658 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.688728 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.688781 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.688817 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.688837 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 
11:37:00.688908 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.688943 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.688973 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.688956 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689003 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689015 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689053 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.688985 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689626 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:01.688955207 +0000 UTC m=+23.416659287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689708 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:01.689694764 +0000 UTC m=+23.417398844 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.689736 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:01.689727425 +0000 UTC m=+23.417431505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: E0218 11:37:00.690420 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:01.689747535 +0000 UTC m=+23.417451615 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.922741 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:34:29.173361707 +0000 UTC Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.976699 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.978043 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.980275 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.981672 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.982809 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.983772 4922 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.984786 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.985987 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.987137 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.988045 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.989186 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.990418 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.991159 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.992546 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.993182 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.993843 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.994550 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.994971 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.995552 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.996174 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.996682 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.997244 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.997754 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.998704 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.999174 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 11:37:00 crc kubenswrapper[4922]: I0218 11:37:00.999798 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.000556 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.003078 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.005130 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.007286 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.008005 4922 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.008165 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.010169 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.011150 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.011885 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.013522 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.014560 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.015344 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.016345 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.017346 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.018983 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.019935 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.021830 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.022942 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.024292 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.026570 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.028548 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.029673 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.030559 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.031348 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.032118 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.033076 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.034130 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.035115 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.131860 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.131919 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.131935 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"184f28cf2d4378c04e9175430295b3af9bc0d81faedb6ccbf913076a666644cb"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.133500 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" exitCode=0 Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.133579 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.133630 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"925e7e67a4fbe78eeed080fb6248ee3ea896e9f6ac32d11416ce15ab5e43d0fb"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.142655 4922 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.142718 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"91a352607919a46845c4d118bbbb9255f0c9a7fa067c63dc230d7e24435313c3"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.145591 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w46bt" event={"ID":"067f44ac-9e60-4581-87cc-f2e1c823fc4c","Type":"ContainerStarted","Data":"c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.145632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w46bt" event={"ID":"067f44ac-9e60-4581-87cc-f2e1c823fc4c","Type":"ContainerStarted","Data":"5b7c4d01264221ec01062dd316e3f65f1514dda9ec238d45b751f38f58ca7e76"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.148476 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.148515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.148528 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f9c954d8a26388280d54536ac050bf4b9fc46a86a540e8ea8272bbb404b2be12"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.150915 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e" exitCode=0 Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.150974 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.150993 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerStarted","Data":"279f20a7de4fb8ce518321ba3f9ea1dfd8f527c83b87cfc76af3a8271e76a690"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.153920 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54336f292dc20f541fbc275cfb15270551b4b10cabac510fcffaa8364cc0a3cc"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.154548 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:
36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.156294 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q5qkb" event={"ID":"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a","Type":"ContainerStarted","Data":"61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.156378 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q5qkb" event={"ID":"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a","Type":"ContainerStarted","Data":"fea43d3481437797091f71baf35e55a9212d9c3819a4ee4d5a880815d5152258"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.160894 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerStarted","Data":"83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.160936 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerStarted","Data":"369d3ce9933d0ba2aa5337839d9116d19adb4c543b8bd377f6a5a4cf7dbcd90d"} Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.168744 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.181174 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.191929 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.201862 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.224076 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.234250 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.247583 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.258493 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.271703 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.290186 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.306175 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.318770 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.339521 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.354006 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.374381 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.398679 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.416624 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.430113 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.453585 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.466135 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.482759 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.494759 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.514005 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.538344 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.551053 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.569729 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.581141 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:01Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.603507 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.603681 4922 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:03.603662828 +0000 UTC m=+25.331366908 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.704674 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.704728 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.704770 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.704798 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704892 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704927 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704958 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:03.704942028 +0000 UTC m=+25.432646128 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704961 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704980 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:03.704967518 +0000 UTC m=+25.432671598 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705000 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705018 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.704973 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705088 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:03.705067471 +0000 UTC m=+25.432771611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705088 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705115 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.705153 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:03.705141482 +0000 UTC m=+25.432845562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.923045 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:29:19.51094808 +0000 UTC Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.972608 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.972648 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:01 crc kubenswrapper[4922]: I0218 11:37:01.972664 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.972759 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.972809 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:01 crc kubenswrapper[4922]: E0218 11:37:01.972870 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.166247 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e" exitCode=0 Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.166398 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174046 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174102 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174127 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.174152 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.188199 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.229684 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.263448 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.293298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.310848 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.326506 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.342396 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.354178 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.368792 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.390219 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.425848 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.446893 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.460444 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.475631 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.895849 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 11:37:02 crc kubenswrapper[4922]: I0218 11:37:02.923640 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:49:26.847003208 +0000 UTC Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.179471 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00" exitCode=0 Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.179558 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00"} Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.182995 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04"} Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.214045 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z 
is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.240830 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.258965 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.273752 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.288487 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.300786 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.313887 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.330790 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.344587 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.360202 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.374773 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.388062 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.403977 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.418298 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.431523 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.448124 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.463771 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.478495 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.497318 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.510785 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.525150 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.538418 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.560513 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.586782 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.603992 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.619285 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.627834 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.627996 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:07.627973772 +0000 UTC m=+29.355677852 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.634135 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.646440 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.729254 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.729310 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.729346 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.729395 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729496 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729548 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729567 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729578 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729586 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:07.729565807 +0000 UTC m=+29.457269887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729626 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:07.729613659 +0000 UTC m=+29.457317739 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729671 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729689 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:07.72968372 +0000 UTC m=+29.457387800 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729728 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729737 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729744 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.729761 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:07.729755892 +0000 UTC m=+29.457459972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.924786 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:48:38.53131069 +0000 UTC Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.972729 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.972762 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.972867 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:03 crc kubenswrapper[4922]: I0218 11:37:03.972729 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.972966 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:03 crc kubenswrapper[4922]: E0218 11:37:03.973057 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.143424 4922 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.145603 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.145661 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.145674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.145750 4922 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.153559 4922 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.153788 4922 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.154834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.154896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.154913 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.154951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: 
I0218 11:37:04.154966 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.170647 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.175317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.175340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.175351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.175378 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.175391 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.189796 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5" exitCode=0 Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.189909 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5"} Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.193874 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.197646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.197685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.197697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.197720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.197733 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.198059 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.211848 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.216247 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.220744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.220806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.220818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.220837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.220850 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.229135 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.237413 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e
6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.240671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.240701 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.240714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.240733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.240746 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.250032 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-c
ni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.254716 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: E0218 11:37:04.254867 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.258412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.258475 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.258517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.258556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.258583 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.266378 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.280486 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.301654 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.313155 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.329700 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.344685 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.362924 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.363762 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.363823 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.363837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.363862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.363879 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.379126 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.392507 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.412823 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.427039 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.467180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.467219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.467232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.467252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.467268 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.570277 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.570333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.570345 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.570376 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.570387 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.673459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.673512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.673525 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.673546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.673560 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.776532 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.776886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.776926 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.776963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.776986 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.879886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.879937 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.879951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.879974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.879988 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.925327 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:12:23.933964386 +0000 UTC Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.985552 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.985630 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.985645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.985677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:04 crc kubenswrapper[4922]: I0218 11:37:04.985697 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:04Z","lastTransitionTime":"2026-02-18T11:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.088529 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.088587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.088604 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.088629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.088649 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.117039 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.122435 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.132017 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.143423 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.161724 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.186176 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.191300 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.191384 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.191406 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.191433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.191453 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.205601 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959" exitCode=0 Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.205698 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.211837 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.231071 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.256508 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.282131 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.293589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.293649 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.293664 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.293689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.293707 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.302449 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824
b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.338717 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25b
e58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.357271 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.382252 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.397243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.397293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.397307 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.397329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.397343 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.399914 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.425378 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.437802 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.457174 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.475736 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.494136 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.500241 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.500335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.500401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.500441 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.500467 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.511587 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.528805 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.542721 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.557077 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.577226 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.592880 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.603483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.603533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.603543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.603563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.603579 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.620183 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824
b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.649748 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25b
e58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.668385 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.682403 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.696933 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.707383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.707424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.707436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.707454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.707464 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.712582 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:05Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.811009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.811093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.811142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.811173 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.811198 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.914231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.914295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.914314 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.914339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.914385 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:05Z","lastTransitionTime":"2026-02-18T11:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.925546 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:25:23.86038241 +0000 UTC Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.972469 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.972532 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:05 crc kubenswrapper[4922]: E0218 11:37:05.972573 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:05 crc kubenswrapper[4922]: I0218 11:37:05.972611 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:05 crc kubenswrapper[4922]: E0218 11:37:05.972774 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:05 crc kubenswrapper[4922]: E0218 11:37:05.972968 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.016988 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.017029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.017038 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.017053 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.017064 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.120798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.120838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.120846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.120863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.120873 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223572 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223563 4922 generic.go:334] "Generic (PLEG): container finished" podID="592c6351-c252-4c19-b3b1-167096be2de9" containerID="37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296" exitCode=0 Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.223641 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerDied","Data":"37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.245953 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.263628 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.283766 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.300478 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.319832 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.327316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.327419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.327448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.327482 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.327512 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.340087 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.357818 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.377479 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.408687 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.424113 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.430421 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.430487 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.430513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.430543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.430567 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.436071 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.446975 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.462968 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.488378 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.503239 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.533187 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.533230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.533243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.533261 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.533273 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.636245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.636305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.636320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.636345 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.636384 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.739100 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.739208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.739233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.739265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.739287 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.842440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.842488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.842500 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.842523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.842537 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.926374 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:42:27.171890642 +0000 UTC Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.946650 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.947090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.947110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.947140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:06 crc kubenswrapper[4922]: I0218 11:37:06.947160 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:06Z","lastTransitionTime":"2026-02-18T11:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.049911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.049949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.049959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.049975 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.049986 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.153271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.153325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.153333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.153351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.153384 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.256596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.256703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.256723 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.256805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.256825 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.360702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.360750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.360761 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.360780 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.360795 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.463125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.463184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.463206 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.463228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.463245 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.485120 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.486016 4922 scope.go:117] "RemoveContainer" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.486335 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.566885 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.566971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.566993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.567027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.567051 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.670353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.670422 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.670437 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.670458 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.670473 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.676326 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.676463 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.676446509 +0000 UTC m=+37.404150589 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.772427 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.772463 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.772472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.772488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.772499 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.777070 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.777129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.777178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.777220 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777249 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777276 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777315 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.777295466 +0000 UTC m=+37.504999546 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777330 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.777324197 +0000 UTC m=+37.505028277 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777371 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777398 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777415 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777483 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.777471341 +0000 UTC m=+37.505175641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777580 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777610 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777626 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.777701 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.777679676 +0000 UTC m=+37.505383776 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.875809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.875854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.875864 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.875882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.875896 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.926937 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:48:02.920680787 +0000 UTC Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.972413 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.972463 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.972545 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.972674 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.972736 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:07 crc kubenswrapper[4922]: E0218 11:37:07.972890 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.979094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.979152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.979170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.979199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:07 crc kubenswrapper[4922]: I0218 11:37:07.979217 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:07Z","lastTransitionTime":"2026-02-18T11:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.083123 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.083189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.083212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.083244 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.083267 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.186162 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.186195 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.186204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.186224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.186235 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.232497 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" event={"ID":"592c6351-c252-4c19-b3b1-167096be2de9","Type":"ContainerStarted","Data":"fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.253661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.254146 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.254273 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.256696 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.269971 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.285436 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc 
kubenswrapper[4922]: I0218 11:37:08.288624 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288654 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288766 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.288860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.298846 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.308714 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.320129 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.330685 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.343231 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.360114 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z 
is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.382854 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.392512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.392562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.392573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.392592 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.392621 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.396250 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.408788 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.423834 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.437349 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.449590 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.467804 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.483899 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.495661 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.495699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.495711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.495733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.495744 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.498340 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.511143 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.524601 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.545532 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c371
57b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.561351 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.576730 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.590425 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.598575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.598652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.598667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.598689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.598912 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.604896 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.634940 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.650478 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.666959 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.683604 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.702551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.702620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.702638 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.702659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.702671 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.704039 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.805686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.805768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.805796 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.805829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.805854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.908942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.909033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.909059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.909094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.909118 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:08Z","lastTransitionTime":"2026-02-18T11:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.927287 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:20:31.770359106 +0000 UTC Feb 18 11:37:08 crc kubenswrapper[4922]: I0218 11:37:08.997591 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.012886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.012966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.012987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.013014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.013031 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.020476 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.034122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.054571 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.071670 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.088410 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.115140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.115483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.115587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.115670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.115768 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.116975 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.146468 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299
dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.165027 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.178497 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.191259 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.202678 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218018 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.218904 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.231512 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.249858 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.255869 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.322067 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.322108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.322121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.322138 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.322150 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.425464 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.425681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.425707 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.425722 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.425732 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.529137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.529186 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.529203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.529226 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.529251 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.632877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.632947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.632970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.633002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.633026 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.736595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.736986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.737002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.737021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.737035 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.840854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.840929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.840941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.840964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.840979 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.927626 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:51:40.002640609 +0000 UTC Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.943679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.943733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.943742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.943756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.943766 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:09Z","lastTransitionTime":"2026-02-18T11:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.973112 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.973184 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:09 crc kubenswrapper[4922]: I0218 11:37:09.973163 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:09 crc kubenswrapper[4922]: E0218 11:37:09.973333 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:09 crc kubenswrapper[4922]: E0218 11:37:09.973451 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:09 crc kubenswrapper[4922]: E0218 11:37:09.973599 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.047959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.048022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.048041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.048069 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.048087 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.151559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.151627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.151646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.151675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.151726 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.254998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.255045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.255057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.255073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.255085 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.260297 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.366099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.366152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.366206 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.366222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.366234 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.468209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.468240 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.468248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.468281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.468291 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.570584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.570628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.570665 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.570681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.570692 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.674660 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.674697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.674709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.674725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.674738 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.776959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.776995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.777003 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.777016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.777025 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.880425 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.880487 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.880505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.880527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.880543 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.928438 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:15:19.402143584 +0000 UTC Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.983027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.983140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.983151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.983166 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:10 crc kubenswrapper[4922]: I0218 11:37:10.983180 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:10Z","lastTransitionTime":"2026-02-18T11:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.085296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.085326 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.085335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.085347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.085356 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.187461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.187503 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.187513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.187527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.187539 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.289607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.289637 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.289645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.289657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.289665 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.392348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.392416 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.392429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.392451 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.392466 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.495646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.495716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.495730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.495756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.495772 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.598577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.598646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.598656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.598673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.598682 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.702127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.702213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.702235 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.702276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.702295 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.791454 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs"] Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.792216 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.796831 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.800595 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.805830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.805878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.805891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.805909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.805920 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.812840 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.838579 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.859683 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.878708 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/648f85d5-dbc6-4db6-b590-3edc96740212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.878796 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8l2m\" (UniqueName: \"kubernetes.io/projected/648f85d5-dbc6-4db6-b590-3edc96740212-kube-api-access-t8l2m\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.878845 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.878876 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.879920 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.899444 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.909068 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.909139 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.909157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.909191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.909212 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:11Z","lastTransitionTime":"2026-02-18T11:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.919887 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.929406 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:50:41.61267516 +0000 UTC Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.937993 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.955070 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.972580 4922 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.972753 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.972754 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:11 crc kubenswrapper[4922]: E0218 11:37:11.972973 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:11 crc kubenswrapper[4922]: E0218 11:37:11.973057 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:11 crc kubenswrapper[4922]: E0218 11:37:11.973128 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.979274 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/648f85d5-dbc6-4db6-b590-3edc96740212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.979335 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8l2m\" (UniqueName: \"kubernetes.io/projected/648f85d5-dbc6-4db6-b590-3edc96740212-kube-api-access-t8l2m\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.979406 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.979445 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.980217 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.980936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/648f85d5-dbc6-4db6-b590-3edc96740212-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.990236 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:11Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:11 crc kubenswrapper[4922]: I0218 11:37:11.990524 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/648f85d5-dbc6-4db6-b590-3edc96740212-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.001311 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8l2m\" (UniqueName: 
\"kubernetes.io/projected/648f85d5-dbc6-4db6-b590-3edc96740212-kube-api-access-t8l2m\") pod \"ovnkube-control-plane-749d76644c-hjnxs\" (UID: \"648f85d5-dbc6-4db6-b590-3edc96740212\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.018679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.018733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.018743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.018759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.018769 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.021353 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a624788
9913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.034953 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.048574 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.074551 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.096770 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.120084 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.120940 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.122319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.122343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.122357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.122400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.122414 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.168122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.227892 4922 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.227972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.227984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.228002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.228014 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.268661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" event={"ID":"648f85d5-dbc6-4db6-b590-3edc96740212","Type":"ContainerStarted","Data":"1bcc10b804e00f4e192af1480d9a0c3b7bddddf3bf72aaefae3d196eefc623b3"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.270730 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/0.log" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.272976 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6" exitCode=1 Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.273021 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.273767 4922 scope.go:117] "RemoveContainer" containerID="b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.291000 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.305797 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.321855 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.332978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.333245 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.333266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.333555 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.333920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.334022 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.347293 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.363468 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.379856 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.408467 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.428167 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.436507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.436554 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.436564 4922 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.436580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.436590 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.446881 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.464310 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.478271 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.494600 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.513931 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.535343 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.539218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.539250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.539259 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.539277 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.539288 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.551867 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.640919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.640976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.640984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.640996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.641007 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.744044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.744097 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.744115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.744136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.744152 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.846680 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.846727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.846743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.846763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.846778 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.929705 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:37:13.876288719 +0000 UTC Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.949836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.949887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.949905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.949929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:12 crc kubenswrapper[4922]: I0218 11:37:12.949947 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:12Z","lastTransitionTime":"2026-02-18T11:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.051912 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.051978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.051990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.052006 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.052018 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.155046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.155086 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.155097 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.155114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.155126 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.258195 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.258232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.258241 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.258257 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.258293 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.360730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.360768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.360778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.360791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.360800 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.463023 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.463060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.463068 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.463082 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.463099 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.565600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.565648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.565659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.565676 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.565690 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.668128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.668191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.668208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.668234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.668263 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.728070 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pspfr"] Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.728965 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.729092 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.748019 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.765232 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.770387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.770431 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.770442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.770458 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.770470 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.817868 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.817931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7224x\" (UniqueName: \"kubernetes.io/projected/4702cf45-b47b-4291-a553-5bfc7bc22674-kube-api-access-7224x\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.819645 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\
\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.840263 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishe
dAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567a
cb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.853668 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.864690 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.872587 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.872648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.872665 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.872688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.872703 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.880150 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.893295 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.912430 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.918629 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7224x\" (UniqueName: \"kubernetes.io/projected/4702cf45-b47b-4291-a553-5bfc7bc22674-kube-api-access-7224x\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.918723 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.918962 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.919025 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:14.419004649 +0000 UTC m=+36.146708739 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.930517 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:35:41.676364415 +0000 UTC Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.931803 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.941846 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7224x\" (UniqueName: \"kubernetes.io/projected/4702cf45-b47b-4291-a553-5bfc7bc22674-kube-api-access-7224x\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.947970 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"
ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.971215 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae
68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.972252 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.972269 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.972267 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.972475 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.972608 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:13 crc kubenswrapper[4922]: E0218 11:37:13.972719 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.974971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.975029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.975045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.975067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.975081 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:13Z","lastTransitionTime":"2026-02-18T11:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:13 crc kubenswrapper[4922]: I0218 11:37:13.988850 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:13Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.003251 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.019382 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.036476 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.059739 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.076853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.076887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.076898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.076915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.076929 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.180010 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.180072 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.180087 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.180111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.180126 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.282220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.282264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.282274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.282294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.282310 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.283496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" event={"ID":"648f85d5-dbc6-4db6-b590-3edc96740212","Type":"ContainerStarted","Data":"61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.283543 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" event={"ID":"648f85d5-dbc6-4db6-b590-3edc96740212","Type":"ContainerStarted","Data":"2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.286492 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/0.log" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.297520 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.297716 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.298467 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.309085 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.321486 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.333937 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.341535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.341588 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.341602 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.341624 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.341642 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.350192 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.353426 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.358891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.358923 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.358932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.358947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.358957 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.363985 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.369227 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.371982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.372007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.372016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.372032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.372042 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.374671 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.384306 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.385998 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.388155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.388191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.388203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.388220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.388231 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.396092 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.402878 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408005 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for 
pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408701 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.408732 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.422212 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.422400 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.422417 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae85
25af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.422793 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.422954 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:14 crc kubenswrapper[4922]: E0218 11:37:14.423029 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:15.42301068 +0000 UTC m=+37.150714760 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.425007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.425039 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.425051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.425071 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.425087 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.439103 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.449760 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.470705 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 
ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd80
80de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.490280 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"nam
e\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0
c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.506286 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.521691 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.527499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.527546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.527558 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.527576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.527592 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.538141 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.551462 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.563910 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.578287 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.604693 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.631892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.631968 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.631985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.632009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.632025 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.637057 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.648125 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.666522 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.687484 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.701724 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.717427 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.734664 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.736432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.736496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.736515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.736543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.736569 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.749419 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.767568 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.784865 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.797136 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.810105 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.838818 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.838859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.838869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.838884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.838895 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.931260 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:34:29.893011706 +0000 UTC Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.941272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.941317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.941328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.941344 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:14 crc kubenswrapper[4922]: I0218 11:37:14.941356 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:14Z","lastTransitionTime":"2026-02-18T11:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.044760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.044806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.044817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.044836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.044848 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.147270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.147407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.147421 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.147439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.147459 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.250815 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.250870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.250882 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.250902 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.250914 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.306599 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/1.log" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.307829 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/0.log" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.311356 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746" exitCode=1 Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.311573 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.311676 4922 scope.go:117] "RemoveContainer" containerID="b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.312858 4922 scope.go:117] "RemoveContainer" containerID="a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.313150 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.336349 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.355095 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.355185 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.355201 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.355224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.355239 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.357677 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.375502 4922 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.387203 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.399204 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.410944 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.421044 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.431205 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.433417 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.433601 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.433658 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:17.433644643 +0000 UTC m=+39.161348723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.443068 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.453050 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.458822 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.458866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.458876 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.458897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.458917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.463741 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.487593 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.507555 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.521467 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.545074 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.559242 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.561420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.561721 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.561766 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.561790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.561802 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.571009 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.665759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.665816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.665828 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.665877 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.665890 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.737108 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.737394 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:37:31.737312652 +0000 UTC m=+53.465016772 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.769291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.769348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.769391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.769417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.769436 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.839525 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.839592 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.839655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.839709 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839778 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839812 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839831 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839836 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839831 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839867 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839897 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:31.83987587 +0000 UTC m=+53.567579970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839905 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839920 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:31.83990949 +0000 UTC m=+53.567613590 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839926 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839942 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:31.839933001 +0000 UTC m=+53.567637091 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.839995 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:31.839970572 +0000 UTC m=+53.567674692 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.874483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.874543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.874557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.874582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.874600 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.932679 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:10:49.156870836 +0000 UTC Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.972805 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.972931 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.973226 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.973663 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.973959 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.973995 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.974206 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:15 crc kubenswrapper[4922]: E0218 11:37:15.974469 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.977213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.977305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.977469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.977897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:15 crc kubenswrapper[4922]: I0218 11:37:15.978019 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:15Z","lastTransitionTime":"2026-02-18T11:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.081016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.081076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.081094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.081121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.081140 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.184311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.184417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.184436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.184462 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.184484 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.288387 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.288461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.288495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.288528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.288549 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.318770 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/1.log" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.392595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.392649 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.392668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.392693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.392709 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.495539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.495585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.495595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.495614 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.495627 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.597975 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.598041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.598056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.598082 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.598104 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.701009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.701066 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.701081 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.701106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.701122 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.804323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.804397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.804411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.804432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.804456 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.907749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.907824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.907841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.907908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.907930 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:16Z","lastTransitionTime":"2026-02-18T11:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:16 crc kubenswrapper[4922]: I0218 11:37:16.933236 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:05:20.182744194 +0000 UTC Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.011407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.011468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.011483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.011525 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.011541 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.114912 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.114960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.114973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.114991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.115003 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.218399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.218453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.218467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.218485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.218501 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.321898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.321952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.321965 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.321985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.321998 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.425097 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.425163 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.425183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.425209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.425229 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.457823 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.458026 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.458142 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:21.458120507 +0000 UTC m=+43.185824587 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.528844 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.528899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.528908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.528927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.528938 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.632726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.632782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.632795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.632812 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.632827 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.735653 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.735702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.735710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.735726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.735736 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.838728 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.839344 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.839432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.839472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.839517 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.934113 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:59:59.31929538 +0000 UTC Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.942783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.942833 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.942847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.942866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.942880 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:17Z","lastTransitionTime":"2026-02-18T11:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.973042 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.973235 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.973347 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.973526 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.973637 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.973709 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:17 crc kubenswrapper[4922]: I0218 11:37:17.973761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:17 crc kubenswrapper[4922]: E0218 11:37:17.973813 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.046200 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.046248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.046264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.046288 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.046304 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.149565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.149655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.149716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.149748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.149771 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.252526 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.252567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.252579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.252597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.252612 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.356260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.356332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.356354 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.356417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.356433 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.459086 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.459132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.459142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.459159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.459174 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.562176 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.562243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.562258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.562275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.562286 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.664994 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.665036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.665049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.665064 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.665076 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.767830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.767898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.767911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.767935 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.767948 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.871080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.871127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.871140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.871155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.871166 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.934728 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:07:37.729716804 +0000 UTC Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.973271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.973322 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.973332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.973391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.973407 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:18Z","lastTransitionTime":"2026-02-18T11:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:18 crc kubenswrapper[4922]: I0218 11:37:18.985994 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:18Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.007267 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.024734 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.040602 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.053743 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.073013 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.075546 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.075578 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.075589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.075607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.075617 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.094620 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.106781 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.118834 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.133035 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.157652 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 
6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180357 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180414 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180471 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.180309 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.193513 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.210440 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.232729 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.246624 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.261319 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.284213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.284274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.284287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.284309 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.284326 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.388324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.388419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.388432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.388450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.388461 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.491544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.491607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.491621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.491640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.491653 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.596075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.596132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.596146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.596167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.596185 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.699533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.699583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.699595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.699619 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.699636 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.802664 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.802711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.802721 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.802737 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.802745 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.906327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.906469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.906490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.906519 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.906545 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:19Z","lastTransitionTime":"2026-02-18T11:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.935767 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:43:39.867443601 +0000 UTC Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.972503 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.972585 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.972616 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:19 crc kubenswrapper[4922]: E0218 11:37:19.972800 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:19 crc kubenswrapper[4922]: I0218 11:37:19.972846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:19 crc kubenswrapper[4922]: E0218 11:37:19.972906 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:19 crc kubenswrapper[4922]: E0218 11:37:19.973028 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:19 crc kubenswrapper[4922]: E0218 11:37:19.973218 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.010586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.010657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.010677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.010720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.010760 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.114761 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.114875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.114894 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.114918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.114935 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.218453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.218529 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.218568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.218604 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.218629 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.322167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.322267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.322292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.322321 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.322346 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.425249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.425299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.425310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.425329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.425347 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.528260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.528335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.528390 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.528426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.528448 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.631918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.631992 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.632011 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.632035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.632052 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.735314 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.735417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.735439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.735465 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.735483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.841850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.841921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.841933 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.841947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.842010 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.936761 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:14:05.63413404 +0000 UTC Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.944431 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.944477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.944493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.944515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.944530 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:20Z","lastTransitionTime":"2026-02-18T11:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:20 crc kubenswrapper[4922]: I0218 11:37:20.973337 4922 scope.go:117] "RemoveContainer" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.048165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.048232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.048249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.048278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.048346 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.150660 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.150748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.150798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.150824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.150873 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.253000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.253038 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.253046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.253058 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.253067 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.343127 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.345311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.345683 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.355582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.355636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.355652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.355677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.355695 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.364407 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.381109 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.403328 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.416242 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.431628 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.447946 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.458918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.459331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.459504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.459645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.459767 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.464504 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.477910 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.489590 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.498263 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.503667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.503835 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.503943 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:29.50392515 +0000 UTC m=+51.231629230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.562285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.562342 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.562355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.562389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.562401 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.564057 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.583163 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.595748 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.606219 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.618279 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.637052 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e80f07cd7247e51069bf2b1dbdb4c9276cf6ea8904cc3f193a79c72f996bc6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"message\\\":\\\"r *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.244406 6237 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:11.245284 6237 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:37:11.245412 6237 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 11:37:11.245452 6237 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 11:37:11.245461 6237 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 11:37:11.245483 6237 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:37:11.245493 6237 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:37:11.245499 6237 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:37:11.245735 6237 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:11.245804 6237 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:37:11.245838 6237 factory.go:656] Stopping watch factory\\\\nI0218 11:37:11.245855 6237 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:11.245876 6237 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 
6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.647913 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:21Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.664268 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.664297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.664305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.664318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.664328 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.767051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.767115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.767134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.767158 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.767174 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.870165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.870221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.870232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.870251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.870266 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.937802 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:34:01.704519731 +0000 UTC Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.972296 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.972339 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.972446 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.972446 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.972486 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.972745 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.972904 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:21 crc kubenswrapper[4922]: E0218 11:37:21.973019 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.974196 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.974265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.974289 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.974323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:21 crc kubenswrapper[4922]: I0218 11:37:21.974345 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:21Z","lastTransitionTime":"2026-02-18T11:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.077926 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.077981 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.077991 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.078012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.078023 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.180886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.180946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.180967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.180995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.181017 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.284741 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.284790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.284799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.284814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.284823 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.387282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.387335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.387346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.387402 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.387418 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.490114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.490144 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.490155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.490172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.490185 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.593616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.593692 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.593716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.593747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.593768 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.697290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.697426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.697442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.697467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.697485 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.799632 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.799683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.799697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.799717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.799729 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.902754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.902809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.902818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.902833 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.902842 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:22Z","lastTransitionTime":"2026-02-18T11:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:22 crc kubenswrapper[4922]: I0218 11:37:22.938498 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:46:56.867062781 +0000 UTC Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.004849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.004894 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.004905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.004920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.004932 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.107665 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.107713 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.107726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.107741 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.107754 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.210620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.210872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.210943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.211011 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.211077 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.313605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.313880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.313946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.314026 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.314092 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.416612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.416867 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.416932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.416993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.417059 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.519023 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.519076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.519088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.519102 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.519112 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.622111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.622157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.622167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.622183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.622193 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.725915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.726327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.726440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.726580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.726673 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.829620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.829697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.829714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.829740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.829759 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.932903 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.932958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.932968 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.932984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.932995 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:23Z","lastTransitionTime":"2026-02-18T11:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.939543 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:14:06.916483861 +0000 UTC Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.973019 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.973090 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.973214 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:23 crc kubenswrapper[4922]: I0218 11:37:23.973392 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:23 crc kubenswrapper[4922]: E0218 11:37:23.973392 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:23 crc kubenswrapper[4922]: E0218 11:37:23.973599 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:23 crc kubenswrapper[4922]: E0218 11:37:23.973780 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:23 crc kubenswrapper[4922]: E0218 11:37:23.973915 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.036280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.036327 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.036335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.036351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.036385 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.139204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.139281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.139304 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.139328 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.139347 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.242625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.242693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.242720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.242749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.242769 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.346698 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.346862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.346883 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.346906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.346961 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.450052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.450101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.450110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.450129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.450141 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.553641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.553695 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.553713 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.553742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.553756 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.657479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.657580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.657603 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.657632 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.657652 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.760478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.760532 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.760549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.760574 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.760591 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.813525 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.813584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.813603 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.813627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.813644 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.831626 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:24Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.836593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.836813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.836941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.837101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.837230 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.859400 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:24Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.864593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.864636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.864646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.864662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.864672 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.882744 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:24Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.887034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.887149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.887176 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.887205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.887230 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.909020 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:24Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.914787 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.914860 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.914877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.914902 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.914929 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.935326 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:24Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:24 crc kubenswrapper[4922]: E0218 11:37:24.935620 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.937911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.938001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.938021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.938042 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.938058 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:24Z","lastTransitionTime":"2026-02-18T11:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:24 crc kubenswrapper[4922]: I0218 11:37:24.940230 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:52:11.751949413 +0000 UTC Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.040672 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.040715 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.040727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.040744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.040756 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.143428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.143494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.143516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.143543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.143563 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.246284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.246434 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.246460 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.246490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.246509 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.293159 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.294506 4922 scope.go:117] "RemoveContainer" containerID="a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.310545 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.336602 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc 
kubenswrapper[4922]: I0218 11:37:25.350033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.350092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.350113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.350137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.350155 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.356884 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.374389 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.388949 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.407283 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.427499 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.445849 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.452535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.452568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.452580 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.452597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.452609 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.460517 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.475929 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.502147 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.521715 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.537974 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.551245 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.556502 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.556542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.556556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.556576 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.556589 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.576133 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932
f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.599804 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955
a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.615173 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:25Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.659892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.659942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.659956 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.659978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.659992 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.762309 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.762383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.762400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.762420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.762436 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.865535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.865577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.865586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.865600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.865611 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.941330 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:02:18.993819763 +0000 UTC Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.968278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.968348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.968428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.968450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.968461 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:25Z","lastTransitionTime":"2026-02-18T11:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.972759 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.972830 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.972820 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:25 crc kubenswrapper[4922]: I0218 11:37:25.972761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:25 crc kubenswrapper[4922]: E0218 11:37:25.972896 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:25 crc kubenswrapper[4922]: E0218 11:37:25.973010 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:25 crc kubenswrapper[4922]: E0218 11:37:25.973130 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:25 crc kubenswrapper[4922]: E0218 11:37:25.973229 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.070771 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.070824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.070847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.070875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.070894 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.156236 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.167263 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.173976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.174035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.174046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.174066 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.174080 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.184164 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.209647 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955
a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.224134 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.241628 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.253970 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.263289 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.272183 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.275772 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.275807 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.275818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.275836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.275847 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.288025 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.301886 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.314727 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.324860 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.335969 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.345636 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.355532 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.362639 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/2.log" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.363304 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/1.log" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.366057 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353" exitCode=1 Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.366133 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.366194 4922 scope.go:117] "RemoveContainer" containerID="a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.366994 4922 scope.go:117] 
"RemoveContainer" containerID="9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353" Feb 18 11:37:26 crc kubenswrapper[4922]: E0218 11:37:26.367172 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.369544 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.378040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.378065 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.378073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.378087 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.378096 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.383549 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.396080 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.407492 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.423638 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.436041 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.447886 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.457472 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.467089 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.476170 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.480014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.480035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.480067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.480081 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.480088 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.488721 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.499718 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.510028 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.520746 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.531979 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.548136 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eb
a698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped 
ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.566834 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\
":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.577867 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.582020 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.582051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.582062 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.582076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.582086 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.588561 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.599348 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.609966 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:26Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.684673 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.684719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.684736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.684757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.684770 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.787571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.787649 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.787681 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.787711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.787736 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.891107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.891496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.891645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.891806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.891938 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.941753 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:17:53.987306455 +0000 UTC Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.995228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.995291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.995309 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.995334 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:26 crc kubenswrapper[4922]: I0218 11:37:26.995350 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:26Z","lastTransitionTime":"2026-02-18T11:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.098130 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.098215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.098233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.098734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.098793 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.202194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.202241 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.202260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.202281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.202298 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.304819 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.304858 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.304869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.304884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.304894 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.369273 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/2.log" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.407689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.407726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.407734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.407750 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.407759 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.511474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.511540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.511602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.511627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.511641 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.614774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.614835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.614850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.614886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.614898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.717203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.717274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.717293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.717316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.717332 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.820495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.820537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.820547 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.820562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.820574 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.923798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.923848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.923859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.923877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.923891 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:27Z","lastTransitionTime":"2026-02-18T11:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.942646 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:05:58.245849871 +0000 UTC Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.972343 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.972468 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.972355 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:27 crc kubenswrapper[4922]: E0218 11:37:27.972571 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:27 crc kubenswrapper[4922]: I0218 11:37:27.972654 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:27 crc kubenswrapper[4922]: E0218 11:37:27.972730 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:27 crc kubenswrapper[4922]: E0218 11:37:27.972851 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:27 crc kubenswrapper[4922]: E0218 11:37:27.972999 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.026722 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.026793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.026815 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.026851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.026870 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.129598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.129674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.129699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.129773 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.129791 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.232326 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.232399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.232415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.232432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.232444 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.336326 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.336457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.336512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.336560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.336587 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.440355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.440518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.440542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.440572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.440594 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.542942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.543028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.543061 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.543094 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.543117 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.646240 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.646325 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.646426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.646456 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.646476 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.749222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.749280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.749295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.749313 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.749326 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.851834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.851886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.851899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.851917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.851930 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.943527 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:02:39.976614463 +0000 UTC Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.954834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.954889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.954902 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.954919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.954931 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:28Z","lastTransitionTime":"2026-02-18T11:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:28 crc kubenswrapper[4922]: I0218 11:37:28.991529 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:28Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.005498 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.023389 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.039501 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.055080 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.056952 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.057022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.057037 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.057059 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.057098 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.087319 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eb
a698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped 
ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.098979 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.116120 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.133441 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.149134 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.160615 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.160674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc 
kubenswrapper[4922]: I0218 11:37:29.160685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.160703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.160715 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.163267 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:3
7:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.176768 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.194311 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.209929 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae85
25af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.226022 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.242765 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.259237 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.262702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.262737 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.262748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.262766 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.262779 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.274619 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:29Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.365758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.365824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.365835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.365859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.365874 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.468544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.468607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.468624 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.468652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.468670 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.572637 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.572694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.572716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.572748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.572772 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.593327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.593548 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.593674 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:37:45.593647664 +0000 UTC m=+67.321351774 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.675613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.675688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.675699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.675715 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.675724 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.778571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.778638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.778662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.778696 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.778724 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.883001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.883066 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.883087 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.883114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.883135 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.943737 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:53:01.649068939 +0000 UTC Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.973100 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.973355 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.973598 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.973683 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.973810 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.973627 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.973972 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:29 crc kubenswrapper[4922]: E0218 11:37:29.974166 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.986769 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.986828 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.986851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.986880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:29 crc kubenswrapper[4922]: I0218 11:37:29.986903 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:29Z","lastTransitionTime":"2026-02-18T11:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.089776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.089851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.089870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.089896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.089916 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.192959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.193050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.193075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.193103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.193123 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.296577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.296700 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.296720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.296748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.296765 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.400073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.400127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.400146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.400174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.400214 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.503134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.503186 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.503200 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.503223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.503238 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.606518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.606623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.606641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.606663 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.606680 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.709638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.709756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.709776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.709801 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.709815 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.812466 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.812536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.812549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.812569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.812585 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.915549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.915618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.915638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.915662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.915680 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:30Z","lastTransitionTime":"2026-02-18T11:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:30 crc kubenswrapper[4922]: I0218 11:37:30.943956 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:17:36.510236309 +0000 UTC Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.018759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.018800 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.018809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.018824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.018835 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.121957 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.122028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.122048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.122075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.122096 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.225224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.225274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.225288 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.225308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.225321 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.328218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.328266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.328277 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.328294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.328306 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.431260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.431319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.431333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.431353 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.431387 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.534703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.534793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.534816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.534846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.534868 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.637844 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.637936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.637964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.637990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.638008 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.740526 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.740570 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.740585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.740605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.740621 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.818007 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.818254 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:38:03.818229974 +0000 UTC m=+85.545934064 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.843230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.843286 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.843308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.843331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.843346 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.919419 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.919514 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.919581 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919613 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.919624 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919649 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919666 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919710 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919732 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:38:03.919709967 +0000 UTC m=+85.647414057 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919737 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919755 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919795 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:38:03.919781808 +0000 UTC m=+85.647485898 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919839 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919874 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919956 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:38:03.919928642 +0000 UTC m=+85.647632772 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.919995 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:38:03.919975453 +0000 UTC m=+85.647679583 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.944117 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:15:39.199787056 +0000 UTC Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.946148 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.946206 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.946229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.946259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.946281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:31Z","lastTransitionTime":"2026-02-18T11:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.972587 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.972712 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.972587 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.972771 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:31 crc kubenswrapper[4922]: I0218 11:37:31.972609 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.972922 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.973037 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:31 crc kubenswrapper[4922]: E0218 11:37:31.973176 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.049322 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.049401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.049421 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.049449 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.049464 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.152583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.152641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.152656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.152678 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.152694 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.256265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.256329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.256340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.256407 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.256423 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.359468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.359523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.359535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.359555 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.359569 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.463237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.463279 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.463295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.463316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.463329 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.566142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.566231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.566250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.566272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.566286 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.668527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.668755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.668816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.668906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.668967 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.772016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.772084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.772095 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.772112 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.772124 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.874199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.874232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.874242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.874260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.874271 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.944508 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:12:38.291885881 +0000 UTC Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.977049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.977090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.977107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.977129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:32 crc kubenswrapper[4922]: I0218 11:37:32.977145 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:32Z","lastTransitionTime":"2026-02-18T11:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.079776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.079836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.079853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.079878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.079893 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.182208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.182534 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.182712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.182849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.182967 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.285166 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.285200 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.285213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.285225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.285235 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.392218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.392473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.392484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.392499 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.392510 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.495524 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.495637 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.495666 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.495702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.495732 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.600805 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.600871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.600884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.600903 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.600917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.703983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.704042 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.704056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.704077 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.704092 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.806898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.806951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.806961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.806976 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.806985 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.909004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.909072 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.909101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.909129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.909147 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:33Z","lastTransitionTime":"2026-02-18T11:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.945606 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:53:40.938791194 +0000 UTC Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.972918 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.973000 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:33 crc kubenswrapper[4922]: E0218 11:37:33.973079 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:33 crc kubenswrapper[4922]: E0218 11:37:33.973157 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.973309 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:33 crc kubenswrapper[4922]: I0218 11:37:33.973797 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:33 crc kubenswrapper[4922]: E0218 11:37:33.973940 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:33 crc kubenswrapper[4922]: E0218 11:37:33.974060 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.011972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.012014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.012023 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.012037 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.012047 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.115044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.115091 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.115103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.115119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.115131 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.218234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.218282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.218295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.218318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.218331 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.322126 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.322187 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.322205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.322231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.322249 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.425401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.425612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.425643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.425676 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.425701 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.528769 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.528840 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.528856 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.528880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.528898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.631772 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.631844 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.631869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.631896 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.631917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.734704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.734773 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.734786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.734817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.734835 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.838215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.838290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.838311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.838339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.838404 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.942139 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.942202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.942214 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.942237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.942251 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:34Z","lastTransitionTime":"2026-02-18T11:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:34 crc kubenswrapper[4922]: I0218 11:37:34.946245 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:44:15.432524908 +0000 UTC Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.041268 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.041339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.041405 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.041440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.041462 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.065332 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.071667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.071744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.071758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.071783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.071797 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.093413 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.103142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.103223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.103242 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.103269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.103288 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.122648 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.128220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.128504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.128523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.128548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.128567 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.145808 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.151251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.151313 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.151335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.151393 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.151416 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.178788 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.178931 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.182002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.182064 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.182079 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.182100 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.182117 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.284792 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.284829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.284837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.284852 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.284861 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.387865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.387957 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.388026 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.388062 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.388085 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.491173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.491214 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.491226 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.491243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.491255 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.594401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.594474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.594488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.594512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.594526 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.698019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.698101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.698128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.698157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.698176 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.801909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.801985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.802011 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.802039 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.802059 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.904704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.904748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.904762 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.904779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.904791 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:35Z","lastTransitionTime":"2026-02-18T11:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.946764 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:17:54.475449117 +0000 UTC Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.972440 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.972509 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.972580 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:35 crc kubenswrapper[4922]: I0218 11:37:35.972459 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.972582 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.972703 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.972877 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:35 crc kubenswrapper[4922]: E0218 11:37:35.973223 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.009086 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.009143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.009152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.009168 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.009180 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.112309 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.112400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.112424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.112446 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.112461 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.215776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.215841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.215866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.215894 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.215914 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.319711 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.319821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.319846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.319871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.319892 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.423549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.423612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.423623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.423643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.423701 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.525994 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.526068 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.526105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.526138 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.526163 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.629492 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.629542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.629561 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.629585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.629603 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.733332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.733436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.733461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.733491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.733512 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.836205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.836266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.836285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.836308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.836327 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.939733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.940013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.940090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.940190 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.940250 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:36Z","lastTransitionTime":"2026-02-18T11:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.947337 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:43:00.85248585 +0000 UTC Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.966451 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.973047 4922 scope.go:117] "RemoveContainer" containerID="9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353" Feb 18 11:37:36 crc kubenswrapper[4922]: E0218 11:37:36.973213 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.982302 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:36 crc kubenswrapper[4922]: I0218 11:37:36.996309 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.007184 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.019716 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.032515 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.042710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.042738 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.042747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.042760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.042769 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.045137 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.058856 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.072650 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.088966 4922 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536
b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.103345 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.116339 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.136988 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.146841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.146877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.146908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.146924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.146934 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.161180 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a72b7b3feb4b1eda6d0a688e2f4f1dbb05dbf932f4e786b6e414f147964ce746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"message\\\":\\\"rking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0218 11:37:14.424128 6409 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424192 6409 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424197 6409 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0218 11:37:14.424238 6409 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 11:37:14.424249 6409 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 11:37:14.424254 6409 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 11:37:14.424121 6409 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0218 11:37:14.423917 6409 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.189850 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a62478899
13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.211206 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.224727 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.242941 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.249914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.249972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.249987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.250009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.250023 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.260869 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.273927 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.295296 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\
":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352695 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352729 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.352859 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.375110 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.395379 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.410965 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.423245 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.437734 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.452830 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.455774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.455824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.455836 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.455855 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.455867 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.475526 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.495560 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.511929 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.526302 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.542430 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.556610 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.558456 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.558498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.558507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.558536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.558545 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.570253 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.581268 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.594944 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.662233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.662324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.662348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.662410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.662435 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.765090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.765138 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.765149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.765165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.765175 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.868346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.868491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.868517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.868548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.868570 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.947879 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:16:39.122661884 +0000 UTC Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.971807 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.971878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.971899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.971922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.971942 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:37Z","lastTransitionTime":"2026-02-18T11:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.972569 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.972764 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:37 crc kubenswrapper[4922]: E0218 11:37:37.972907 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.972951 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:37 crc kubenswrapper[4922]: E0218 11:37:37.973279 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:37 crc kubenswrapper[4922]: E0218 11:37:37.973488 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:37 crc kubenswrapper[4922]: I0218 11:37:37.973623 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:37 crc kubenswrapper[4922]: E0218 11:37:37.973807 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.075598 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.075686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.075713 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.075745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.075769 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.178165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.178622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.178833 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.179024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.179209 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.282468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.282544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.282577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.282612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.282637 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.385285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.385514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.385544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.385575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.385600 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.489766 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.489837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.489861 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.489894 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.489917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.593072 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.593132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.593152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.593175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.593191 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.696004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.696088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.696110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.696145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.696166 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.799049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.799122 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.799140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.799167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.799186 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.902202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.902297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.902316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.902340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.902387 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:38Z","lastTransitionTime":"2026-02-18T11:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:38 crc kubenswrapper[4922]: I0218 11:37:38.948109 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:49:59.265596094 +0000 UTC Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.004963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.005569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.005585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.005605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.005618 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.006736 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c
610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.028772 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.049516 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.067809 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.088793 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.110279 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.110759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.110967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.111151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.111286 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.127509 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eb
a698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.145568 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.169532 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.190012 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.215709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.215821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.215840 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.215867 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.215884 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.216519 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.234302 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.248474 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0
920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.265334 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.286464 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.300741 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.318395 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.318692 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.318826 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.319045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.319198 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.320940 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.336931 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.353141 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:39Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.421801 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.421853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.421871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.421898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.421916 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.524411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.524754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.524938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.525119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.525290 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.628173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.628232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.628258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.628290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.628315 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.731924 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.731971 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.731982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.732001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.732013 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.834586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.834644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.834660 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.834682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.834701 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.937160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.937210 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.937223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.937243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.937255 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:39Z","lastTransitionTime":"2026-02-18T11:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.948536 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:38:43.179070188 +0000 UTC Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.972256 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.972277 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:39 crc kubenswrapper[4922]: E0218 11:37:39.972392 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.972438 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:39 crc kubenswrapper[4922]: I0218 11:37:39.972458 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:39 crc kubenswrapper[4922]: E0218 11:37:39.972633 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:39 crc kubenswrapper[4922]: E0218 11:37:39.972699 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:39 crc kubenswrapper[4922]: E0218 11:37:39.972785 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.040076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.040136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.040153 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.040178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.040196 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.144115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.144172 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.144188 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.144216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.144236 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.246964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.247024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.247045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.247081 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.247105 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.349879 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.349934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.349946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.349967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.349983 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.452854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.452928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.452952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.452986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.453009 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.557265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.557349 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.557400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.557433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.557459 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.660629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.660690 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.660707 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.660730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.660747 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.763076 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.763184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.763205 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.763233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.763252 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.866594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.866632 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.866654 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.866672 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.866705 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.949733 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:53:52.234772539 +0000 UTC Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.970217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.970259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.970272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.970290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:40 crc kubenswrapper[4922]: I0218 11:37:40.970303 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:40Z","lastTransitionTime":"2026-02-18T11:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.072790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.072849 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.072867 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.072891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.072909 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.177146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.177211 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.177233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.177261 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.177284 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.279978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.280103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.280119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.280141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.280157 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.382381 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.382437 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.382448 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.382467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.382479 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.484880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.484934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.484951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.484974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.484991 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.587299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.587329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.587337 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.587348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.587385 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.690679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.690728 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.690744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.690764 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.690781 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.792931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.792960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.792969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.792984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.792994 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.895934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.895969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.895979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.895995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.896019 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.950166 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:52:31.021065778 +0000 UTC Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.972800 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:41 crc kubenswrapper[4922]: E0218 11:37:41.972945 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.973185 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:41 crc kubenswrapper[4922]: E0218 11:37:41.973264 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.973480 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:41 crc kubenswrapper[4922]: E0218 11:37:41.973574 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.973931 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:41 crc kubenswrapper[4922]: E0218 11:37:41.974092 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.998451 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.998491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.998500 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.998514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:41 crc kubenswrapper[4922]: I0218 11:37:41.998523 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:41Z","lastTransitionTime":"2026-02-18T11:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.101547 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.101589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.101606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.101628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.101647 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.205160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.205216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.205237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.205264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.205281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.309416 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.309496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.309512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.309538 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.309561 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.412629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.412678 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.412694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.412716 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.412732 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.515745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.515791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.515802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.515819 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.515831 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.618429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.618481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.618490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.618504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.618512 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.721729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.721824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.721848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.721878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.721898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.826151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.826232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.826250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.826280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.826302 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.929477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.929585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.929606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.929642 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.929668 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:42Z","lastTransitionTime":"2026-02-18T11:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:42 crc kubenswrapper[4922]: I0218 11:37:42.951003 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:50:39.83676961 +0000 UTC Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.031922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.031969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.031980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.031995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.032006 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.133950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.135103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.135181 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.135197 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.135216 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.237770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.237816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.237827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.237842 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.237854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.340021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.340056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.340067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.340083 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.340094 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.441858 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.441892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.441903 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.441918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.441928 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.544517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.544550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.544558 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.544571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.544580 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.647668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.647710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.647726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.647740 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.647750 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.750556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.750610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.750626 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.750652 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.750669 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.853379 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.853430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.853442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.853459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.853471 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.951602 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:09:47.664911598 +0000 UTC Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.956319 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.956425 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.956453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.956485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.956507 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:43Z","lastTransitionTime":"2026-02-18T11:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.972737 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:43 crc kubenswrapper[4922]: E0218 11:37:43.972877 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.973111 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:43 crc kubenswrapper[4922]: E0218 11:37:43.973173 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.973827 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:43 crc kubenswrapper[4922]: I0218 11:37:43.974594 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:43 crc kubenswrapper[4922]: E0218 11:37:43.975273 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:43 crc kubenswrapper[4922]: E0218 11:37:43.974663 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.060029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.060095 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.060117 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.060142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.060159 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.163208 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.163253 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.163265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.163281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.163295 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.265878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.265920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.265933 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.265949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.265966 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.369036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.369093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.369110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.369132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.369150 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.471485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.471528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.471539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.471557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.471569 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.574136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.574171 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.574182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.574197 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.574209 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.676648 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.676682 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.676693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.676709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.676722 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.779700 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.779765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.779776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.779799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.779812 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.882438 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.882493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.882506 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.882524 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.882535 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.952688 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:07:01.625055123 +0000 UTC Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.984926 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.984970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.984980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.984993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:44 crc kubenswrapper[4922]: I0218 11:37:44.985004 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:44Z","lastTransitionTime":"2026-02-18T11:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.087194 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.087256 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.087267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.087290 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.087303 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.190051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.190125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.190147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.190178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.190196 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.293071 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.293107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.293118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.293134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.293145 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.356073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.356148 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.356163 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.356182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.356193 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.372855 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:45Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.376946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.376973 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.376983 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.376995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.377004 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.389866 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:45Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.397583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.397621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.397657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.397678 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.397690 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.413537 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:45Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.418262 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.418329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.418341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.418373 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.418385 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.441863 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:45Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.445888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.445932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.445947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.445970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.445987 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.462487 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:45Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.462661 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.464285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.464329 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.464343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.464385 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.464404 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.567756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.568297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.568495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.568647 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.568797 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.671855 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.671917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.671927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.671947 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.671961 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.675666 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.675983 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.676118 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:38:17.676086113 +0000 UTC m=+99.403790233 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.774496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.774550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.774560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.774579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.774592 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.876600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.876644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.876662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.876677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.876688 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.953719 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:55:41.827592233 +0000 UTC Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.973018 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.973051 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.973116 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.973036 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.973158 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.973347 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.973450 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:45 crc kubenswrapper[4922]: E0218 11:37:45.973546 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.982742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.982806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.982815 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.982831 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:45 crc kubenswrapper[4922]: I0218 11:37:45.982841 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:45Z","lastTransitionTime":"2026-02-18T11:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.085279 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.085343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.085377 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.085397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.085412 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.188053 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.188103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.188115 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.188134 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.188149 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.291514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.291577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.291590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.291612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.291624 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.394411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.394516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.394545 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.394636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.394663 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.497318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.497395 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.497410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.497429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.497444 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.599759 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.599850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.599870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.599903 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.599921 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.703697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.703737 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.703746 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.703760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.703771 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.806863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.806932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.806946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.806970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.806983 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.937384 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.937444 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.937457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.937474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.937487 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:46Z","lastTransitionTime":"2026-02-18T11:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:46 crc kubenswrapper[4922]: I0218 11:37:46.954785 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:42:56.206733582 +0000 UTC Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.040662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.040735 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.040760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.040795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.040821 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.143992 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.144031 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.144042 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.144057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.144071 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.246528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.246571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.246581 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.246596 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.246606 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.349568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.349607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.349617 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.349634 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.349646 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.451675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.451739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.451757 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.451782 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.451800 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.554257 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.554303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.554312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.554330 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.554339 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.657773 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.657846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.657863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.657888 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.657908 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.761107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.761154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.761168 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.761189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.761201 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.865943 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.866001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.866015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.866041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.866067 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.954981 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:38:52.949408249 +0000 UTC Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.968791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.968844 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.968857 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.968876 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.968889 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:47Z","lastTransitionTime":"2026-02-18T11:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.972524 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.972565 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.972639 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:47 crc kubenswrapper[4922]: I0218 11:37:47.972551 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:47 crc kubenswrapper[4922]: E0218 11:37:47.972692 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:47 crc kubenswrapper[4922]: E0218 11:37:47.972811 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:47 crc kubenswrapper[4922]: E0218 11:37:47.972903 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:47 crc kubenswrapper[4922]: E0218 11:37:47.972954 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.071456 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.071516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.071533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.071560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.071577 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.174166 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.174213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.174222 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.174238 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.174251 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.277469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.277528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.277540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.277557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.277570 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.379727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.379786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.379803 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.379827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.379849 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.453414 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/0.log" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.453463 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b4595ac-c521-4ada-950d-e1b01cdff99b" containerID="83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df" exitCode=1 Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.453501 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerDied","Data":"83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.454014 4922 scope.go:117] "RemoveContainer" containerID="83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.463571 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.477090 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.481433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.481471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.481516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.481531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.481540 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.491561 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.509902 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: 
I0218 11:37:48.523826 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.537936 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.548880 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.560921 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.575632 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.584235 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.584283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.584294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.584311 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.584322 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.591855 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.610629 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.622331 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.647631 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eb
a698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.665147 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955
a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.679218 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.686502 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.686528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.686536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.686550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.686561 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.691740 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.702704 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.713335 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.789524 4922 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.789565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.789575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.789592 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.789602 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.892742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.892821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.892846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.892890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.892913 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.955925 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:59:43.130843761 +0000 UTC Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.985697 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.995689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.995726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.995738 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:48 crc 
kubenswrapper[4922]: I0218 11:37:48.995755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.995767 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:48Z","lastTransitionTime":"2026-02-18T11:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:48 crc kubenswrapper[4922]: I0218 11:37:48.999913 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.013010 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.025193 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.043724 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.066616 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.081727 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.095087 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.098423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.098476 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.098495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.098520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.098544 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.109383 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.123158 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.137024 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.148148 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.158851 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.170917 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.199176 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.200447 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.200477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.200489 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.200511 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.200525 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.224353 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.242211 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.258767 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.303269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.303294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.303302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.303315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.303324 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.404961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.404995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.405003 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.405018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.405027 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.458134 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/0.log" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.458194 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerStarted","Data":"71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.473261 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.488418 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.502878 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.506879 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.506911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc 
kubenswrapper[4922]: I0218 11:37:49.506923 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.506940 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.506952 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.512470 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:3
7:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.521412 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.531188 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.542232 4922 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae85
25af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.552283 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.562954 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.573592 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.583622 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.600016 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.609986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.610016 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.610026 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.610039 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.610048 4922 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.611669 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.621718 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.630700 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.642438 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.658652 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.668508 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.713304 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.713379 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.713389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.713432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.713442 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.816800 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.816843 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.816851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.816866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.816876 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.919274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.919321 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.919331 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.919348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.919375 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:49Z","lastTransitionTime":"2026-02-18T11:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.957172 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:46:03.022797714 +0000 UTC Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.972444 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.972507 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.972539 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:49 crc kubenswrapper[4922]: I0218 11:37:49.972543 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:49 crc kubenswrapper[4922]: E0218 11:37:49.972810 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:49 crc kubenswrapper[4922]: E0218 11:37:49.972919 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:49 crc kubenswrapper[4922]: E0218 11:37:49.973037 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:49 crc kubenswrapper[4922]: E0218 11:37:49.973143 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.021491 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.021530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.021539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.021553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.021563 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.123761 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.123801 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.123810 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.123824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.123835 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.226071 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.226121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.226133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.226150 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.226162 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.367001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.367035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.367045 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.367062 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.367074 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.471850 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.471884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.471895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.471909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.471917 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.575771 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.575806 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.575815 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.575836 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.575845 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.678306 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.678388 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.678403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.678420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.678432 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.781510 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.781585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.781610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.781641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.781662 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.884677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.884743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.884754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.884772 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.884782 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.958189 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:54:26.043990047 +0000 UTC Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.974161 4922 scope.go:117] "RemoveContainer" containerID="9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.987212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.987243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.987252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.987266 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:50 crc kubenswrapper[4922]: I0218 11:37:50.987275 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:50Z","lastTransitionTime":"2026-02-18T11:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.090287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.090320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.090335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.090351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.090378 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.192823 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.192891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.192900 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.192915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.192925 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.295170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.295203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.295213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.295231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.295242 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.397527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.397571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.397581 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.397599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.397609 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.464819 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/2.log" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.467326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.467858 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.484146 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.498964 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.507686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.507748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.507763 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.507789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.507806 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.522101 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.535343 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.549280 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.561832 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.573683 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.590198 4922 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536
b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.607547 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.610110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.610142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.610151 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.610166 4922 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.610179 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.623028 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.636036 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.658490 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f9
36a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.685407 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299
dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.700928 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.712825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.712878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.712889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.712910 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.712923 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.718411 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to 
/host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.730885 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.747820 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.760289 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.815283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.815336 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.815351 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.815390 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.815402 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.918623 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.918683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.918704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.918730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.918748 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:51Z","lastTransitionTime":"2026-02-18T11:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.959190 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:13:41.319725541 +0000 UTC Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.972581 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.972649 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.972690 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:51 crc kubenswrapper[4922]: I0218 11:37:51.972599 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:51 crc kubenswrapper[4922]: E0218 11:37:51.972755 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:51 crc kubenswrapper[4922]: E0218 11:37:51.972853 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:51 crc kubenswrapper[4922]: E0218 11:37:51.972991 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:51 crc kubenswrapper[4922]: E0218 11:37:51.973200 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.021838 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.021881 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.021892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.021908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.021919 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.124199 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.124254 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.124275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.124302 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.124320 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.227299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.227380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.227393 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.227415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.227430 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.330278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.330392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.330419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.330454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.330483 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.436493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.436543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.436567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.436590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.436605 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.475221 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/3.log" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.476578 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/2.log" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.479918 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" exitCode=1 Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.479989 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.480045 4922 scope.go:117] "RemoveContainer" containerID="9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.480926 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:37:52 crc kubenswrapper[4922]: E0218 11:37:52.481161 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.500325 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.520995 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.541965 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.542023 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.542036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.542060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.542073 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.545524 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.568518 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: 
I0218 11:37:52.585814 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.603140 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.624014 4922 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536
b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.637690 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.645704 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.645734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.645742 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.645757 4922 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.645770 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.657145 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.674105 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.688727 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.714643 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.750190 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.751970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.752012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.752025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.752044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.752057 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.769085 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.788518 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.801996 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.816556 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.839756 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c0bf0b715f9f63ac45a7294c1ecd769e8b753eba698432388e02092cd5fb353\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"message\\\":\\\"andler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 11:37:26.126150 6616 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:37:26.126155 6616 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126204 6616 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 11:37:26.126216 6616 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 11:37:26.126228 6616 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:37:26.126241 6616 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:37:26.126248 6616 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:37:26.126253 6616 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:37:26.126261 6616 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:37:26.126270 6616 factory.go:656] Stopping watch factory\\\\nI0218 11:37:26.126287 6616 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:37:26.126604 6616 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 11:37:26.126749 6616 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 11:37:26.126791 6616 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:26.126827 6616 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:26.126933 6616 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:51Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:141\\\\nI0218 11:37:51.742382 6997 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.742628 6997 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.742905 6997 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.743039 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.743155 6997 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.744128 6997 factory.go:656] Stopping watch factory\\\\nI0218 11:37:51.758771 6997 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0218 11:37:51.758806 6997 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0218 11:37:51.758875 6997 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:51.758908 6997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:51.759034 6997 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.855201 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.855258 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.855276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.855296 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.855312 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.958012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.958067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.958082 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.958104 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.958118 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:52Z","lastTransitionTime":"2026-02-18T11:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:52 crc kubenswrapper[4922]: I0218 11:37:52.960301 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:29:15.226446896 +0000 UTC Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.062795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.062860 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.062870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.062891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.062905 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.166121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.166175 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.166185 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.166204 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.166216 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.269075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.269154 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.269179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.269212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.269236 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.373394 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.373493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.373515 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.373567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.373597 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.476869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.476927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.476939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.476959 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.476973 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.486390 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/3.log" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.490413 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:37:53 crc kubenswrapper[4922]: E0218 11:37:53.491947 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.510526 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.545888 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:51Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:141\\\\nI0218 11:37:51.742382 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.742628 6997 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.742905 6997 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.743039 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.743155 6997 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.744128 6997 factory.go:656] Stopping watch factory\\\\nI0218 11:37:51.758771 6997 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0218 11:37:51.758806 6997 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0218 11:37:51.758875 6997 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:51.758908 6997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:51.759034 6997 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.575200 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955
a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.580098 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.580236 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc 
kubenswrapper[4922]: I0218 11:37:53.580246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.580265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.580275 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.595040 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.611967 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.625012 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.638756 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.658563 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.669692 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.683392 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.683440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.683451 4922 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.683468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.683480 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.687729 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.701978 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.719050 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.734202 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.749051 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.761471 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.780747 4922 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536
b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.796411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.796588 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.796677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.796710 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.796745 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.799565 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.813715 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.899132 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.899163 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.899173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.899189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.899199 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:53Z","lastTransitionTime":"2026-02-18T11:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.961291 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:53:53.286042603 +0000 UTC Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.972345 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:53 crc kubenswrapper[4922]: E0218 11:37:53.972519 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.972744 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:53 crc kubenswrapper[4922]: E0218 11:37:53.972801 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.972929 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:53 crc kubenswrapper[4922]: E0218 11:37:53.973003 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:53 crc kubenswrapper[4922]: I0218 11:37:53.973216 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:53 crc kubenswrapper[4922]: E0218 11:37:53.973296 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.002461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.002517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.002528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.002551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.002564 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.104802 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.104841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.104851 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.104870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.104881 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.207173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.207210 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.207218 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.207235 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.207246 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.311091 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.311137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.311147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.311164 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.311177 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.416813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.416873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.416883 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.416906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.416926 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.520863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.520927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.520951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.520985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.521011 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.624348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.624473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.624498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.624530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.624549 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.727085 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.727160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.727178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.727202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.727223 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.830950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.831032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.831051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.831088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.831124 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.933918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.933969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.933981 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.934001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.934017 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:54Z","lastTransitionTime":"2026-02-18T11:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:54 crc kubenswrapper[4922]: I0218 11:37:54.961818 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:44:28.962686355 +0000 UTC Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.036905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.036962 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.036978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.037000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.037019 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.140426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.140483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.140495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.140518 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.140535 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.243559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.243625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.243641 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.243664 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.243679 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.347231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.347275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.347287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.347306 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.347319 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.450730 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.450793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.450814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.450834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.450846 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.520200 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.520289 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.520318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.520350 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.520436 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.545086 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:55Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.551269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.551321 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.551339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.551397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.551415 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.570157 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:55Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.577009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.577088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.577101 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.577118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.577129 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.598729 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:55Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.604590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.604733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.604763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.604816 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.604854 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.625980 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:55Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.630958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.631002 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.631015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.631032 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.631045 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.657073 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:55Z is after 
2025-08-24T17:21:41Z" Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.657323 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.660255 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.660301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.660316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.660340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.660377 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.763645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.763709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.763720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.763741 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.763755 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.867272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.867398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.867423 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.867454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.867478 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.962971 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:15:00.593576208 +0000 UTC Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.970917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.971084 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.971189 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.971291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.971390 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:55Z","lastTransitionTime":"2026-02-18T11:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.972270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.972519 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.972781 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.972878 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.973131 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.973258 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:55 crc kubenswrapper[4922]: I0218 11:37:55.973444 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:55 crc kubenswrapper[4922]: E0218 11:37:55.973585 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.073909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.073995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.074020 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.074054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.074076 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.177077 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.177469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.177629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.177775 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.177919 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.281944 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.282493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.282793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.283036 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.283213 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.386103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.386420 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.386513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.386600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.386679 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.489771 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.489827 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.489847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.489875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.489898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.593469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.593535 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.593552 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.593575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.593589 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.697156 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.697230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.697294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.697316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.697326 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.800790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.800873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.800884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.800904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.800919 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.904477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.904551 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.904570 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.904599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.904620 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:56Z","lastTransitionTime":"2026-02-18T11:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:56 crc kubenswrapper[4922]: I0218 11:37:56.964045 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:16:35.377900249 +0000 UTC Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.008389 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.008470 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.008485 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.008513 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.008535 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.111829 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.111920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.111970 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.112012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.112040 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.216025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.216127 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.216138 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.216158 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.216174 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.319481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.319533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.319544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.319561 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.319587 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.423544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.423631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.423658 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.423693 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.423717 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.525929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.525993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.526007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.526038 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.526053 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.628870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.628917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.628929 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.628946 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.628960 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.733930 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.734013 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.734035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.734066 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.734090 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.837747 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.837830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.837854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.837886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.837912 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.941817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.941890 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.941915 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.941941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.941956 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:57Z","lastTransitionTime":"2026-02-18T11:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.964486 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:13:20.457629417 +0000 UTC Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.973091 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.973155 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.973192 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:57 crc kubenswrapper[4922]: I0218 11:37:57.973155 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:57 crc kubenswrapper[4922]: E0218 11:37:57.973399 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:57 crc kubenswrapper[4922]: E0218 11:37:57.973599 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:57 crc kubenswrapper[4922]: E0218 11:37:57.973951 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:37:57 crc kubenswrapper[4922]: E0218 11:37:57.974134 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.045182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.045224 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.045234 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.045251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.045263 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.149158 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.149229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.149248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.149279 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.149330 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.252615 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.252703 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.252724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.252746 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.252758 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.356012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.356343 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.356487 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.356509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.356521 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.460547 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.460659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.460707 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.460744 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.460768 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.565170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.565249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.565264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.565287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.565302 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.668539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.668609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.668627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.668655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.668675 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.773074 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.773136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.773145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.773165 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.773176 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.875870 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.875920 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.875933 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.875949 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.875962 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.965703 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:30:00.825208479 +0000 UTC Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.977964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.978012 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.978023 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.978040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.978054 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:58Z","lastTransitionTime":"2026-02-18T11:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:58 crc kubenswrapper[4922]: I0218 11:37:58.991218 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.012083 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.027907 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.048122 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.066092 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.079964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.079996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.080006 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.080021 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.080032 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.085270 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.103157 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.117232 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.130218 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.141585 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.153453 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.167044 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 
11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.179914 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.182986 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.183026 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.183041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.183061 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.183073 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.201637 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f9
36a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:51Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:141\\\\nI0218 11:37:51.742382 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.742628 6997 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.742905 6997 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.743039 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.743155 6997 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.744128 6997 factory.go:656] Stopping watch factory\\\\nI0218 11:37:51.758771 6997 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0218 11:37:51.758806 6997 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0218 11:37:51.758875 6997 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:51.758908 6997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:51.759034 6997 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.225440 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955
a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.236828 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.247879 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.256244 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:37:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.286228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.286264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.286276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.286289 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.286297 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.388562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.388873 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.388893 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.388917 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.388934 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.491267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.491326 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.491342 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.491383 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.491401 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.593901 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.593937 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.593945 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.593958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.593967 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.696508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.696554 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.696563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.696581 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.696592 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.800670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.800763 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.800790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.800825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.800849 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.904442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.904536 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.904565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.904597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.904622 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:37:59Z","lastTransitionTime":"2026-02-18T11:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.966853 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:49:47.569054548 +0000 UTC Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.972226 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:37:59 crc kubenswrapper[4922]: E0218 11:37:59.972398 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.972454 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:37:59 crc kubenswrapper[4922]: E0218 11:37:59.972512 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.972515 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:37:59 crc kubenswrapper[4922]: I0218 11:37:59.972241 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:37:59 crc kubenswrapper[4922]: E0218 11:37:59.972727 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:37:59 crc kubenswrapper[4922]: E0218 11:37:59.972867 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.007179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.007237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.007250 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.007269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.007281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.110792 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.110862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.110877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.110897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.110911 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.214139 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.214183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.214192 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.214206 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.214218 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.317481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.317528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.317543 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.317564 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.317580 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.421136 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.421227 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.421252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.421283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.421306 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.523768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.523837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.523854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.523880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.523898 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.626498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.626559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.626595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.626622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.626642 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.729869 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.729938 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.729966 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.729995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.730015 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.833745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.833794 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.833811 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.833835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.833852 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.936939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.936998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.937017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.937040 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.937060 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:00Z","lastTransitionTime":"2026-02-18T11:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:00 crc kubenswrapper[4922]: I0218 11:38:00.967220 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:22:09.320796657 +0000 UTC Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.040237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.040352 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.040404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.040434 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.040452 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.143293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.143337 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.143349 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.143380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.143393 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.246228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.246308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.246342 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.246403 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.246426 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.349964 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.350030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.350048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.350073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.350091 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.453096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.453155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.453167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.453186 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.453198 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.556052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.556107 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.556118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.556137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.556150 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.659891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.659963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.659987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.660017 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.660043 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.763307 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.763402 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.763421 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.763443 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.763460 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.866415 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.866467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.866486 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.866509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.866529 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.967313 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:30:09.330633274 +0000 UTC Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.969413 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.969452 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.969468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.969490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.969506 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:01Z","lastTransitionTime":"2026-02-18T11:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.972705 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:01 crc kubenswrapper[4922]: E0218 11:38:01.972905 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.973160 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:01 crc kubenswrapper[4922]: E0218 11:38:01.973264 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.973516 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:01 crc kubenswrapper[4922]: E0218 11:38:01.973622 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:01 crc kubenswrapper[4922]: I0218 11:38:01.973820 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:01 crc kubenswrapper[4922]: E0218 11:38:01.973914 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.072079 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.072128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.072140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.072159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.072172 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.174909 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.175019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.175046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.175074 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.175095 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.278019 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.278083 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.278106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.278141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.278164 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.381335 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.381439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.381466 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.381497 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.381515 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.484985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.485050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.485075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.485104 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.485127 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.588629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.588778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.588810 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.588839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.588859 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.691875 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.691941 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.691967 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.691995 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.692017 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.795267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.795520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.795560 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.795591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.795612 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.898490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.898550 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.898569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.898595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.898612 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:02Z","lastTransitionTime":"2026-02-18T11:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:02 crc kubenswrapper[4922]: I0218 11:38:02.968345 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:23:41.430539238 +0000 UTC Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.001198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.001299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.001317 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.001341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.001357 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.104818 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.104907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.104916 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.104931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.104939 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.208537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.208645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.208666 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.208690 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.208707 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.311434 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.311520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.311542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.311571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.311596 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.415507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.415569 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.415590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.415618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.415639 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.518073 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.518118 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.518130 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.518147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.518159 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.620156 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.620200 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.620216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.620233 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.620245 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.722688 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.722765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.722796 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.722825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.722847 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.825848 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.825907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.825936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.825960 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.825977 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.884900 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.885290 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.885247782 +0000 UTC m=+149.612951892 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.929593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.929671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.929692 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.929719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.929741 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:03Z","lastTransitionTime":"2026-02-18T11:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.968555 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:55:29.993476677 +0000 UTC Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.972148 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.972402 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.972761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.972915 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.973260 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.973425 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.973633 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.973728 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.986605 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.986658 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.986714 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:03 crc kubenswrapper[4922]: I0218 11:38:03.986753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986867 4922 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986879 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986915 4922 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986935 4922 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986971 4922 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986984 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.987031 4922 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.987051 4922 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.986938 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.986918817 +0000 UTC m=+149.714622927 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.987092 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.987071501 +0000 UTC m=+149.714775601 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.987110 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.987099872 +0000 UTC m=+149.714803962 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:38:03 crc kubenswrapper[4922]: E0218 11:38:03.987125 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.987117872 +0000 UTC m=+149.714821972 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.032887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.032927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.032945 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.032962 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.032972 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.137003 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.137096 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.137114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.137183 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.137204 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.241272 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.241395 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.241426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.241453 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.241471 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.345429 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.345496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.345514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.345544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.345563 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.454768 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.454906 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.454928 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.454957 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.454977 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.558835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.558887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.558905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.558931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.558955 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.662530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.662790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.662803 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.662821 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.662833 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.766179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.766274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.766295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.766320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.766338 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.869887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.869990 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.870050 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.870077 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.870133 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.969640 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:32:35.473729346 +0000 UTC Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.974237 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.974297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.974324 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.974399 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:04 crc kubenswrapper[4922]: I0218 11:38:04.974426 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:04Z","lastTransitionTime":"2026-02-18T11:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.078809 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.078883 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.078900 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.078925 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.078942 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.182709 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.182776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.182798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.182825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.182845 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.285488 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.285537 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.285552 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.285573 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.285587 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.389712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.389783 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.389942 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.389974 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.389992 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.493468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.493553 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.493571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.493592 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.493608 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.597044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.597135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.597158 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.597181 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.597199 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.700209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.700267 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.700278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.700298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.700312 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.803220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.803265 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.803278 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.803293 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.803304 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.906664 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.906719 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.906733 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.906752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.906764 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:05Z","lastTransitionTime":"2026-02-18T11:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.970393 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:47:01.474552542 +0000 UTC Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.972795 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.972828 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.972799 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:05 crc kubenswrapper[4922]: I0218 11:38:05.972863 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:05 crc kubenswrapper[4922]: E0218 11:38:05.972994 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:05 crc kubenswrapper[4922]: E0218 11:38:05.973105 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:05 crc kubenswrapper[4922]: E0218 11:38:05.973213 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:05 crc kubenswrapper[4922]: E0218 11:38:05.973265 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.009577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.009671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.009690 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.009715 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.009735 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.022052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.022102 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.022119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.022143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.022166 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.036676 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.042220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.042263 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.042279 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.042298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.042314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.054052 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.058883 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.058921 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.058934 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.058950 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.058969 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.077613 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.082994 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.083031 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.083043 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.083060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.083072 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.101702 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.106992 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.107268 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.107292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.107312 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.107335 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.128713 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:06Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:06 crc kubenswrapper[4922]: E0218 11:38:06.128953 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.131531 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.131605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.131628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.131656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.131674 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.235790 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.235862 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.235879 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.235905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.235926 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.339029 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.339078 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.339089 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.339104 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.339116 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.441958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.442046 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.442064 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.442090 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.442108 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.544287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.544398 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.544413 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.544436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.544453 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.648347 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.648436 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.648454 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.648481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.648499 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.752080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.752139 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.752152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.752173 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.752186 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.855593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.855659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.855679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.855705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.855724 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.958743 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.958793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.958812 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.958833 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.958850 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:06Z","lastTransitionTime":"2026-02-18T11:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:06 crc kubenswrapper[4922]: I0218 11:38:06.973573 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:00:36.122546661 +0000 UTC Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.062341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.062444 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.062474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.062505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.062531 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.166065 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.166126 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.166145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.166169 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.166188 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.269339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.269412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.269425 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.269445 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.269459 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.373808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.373857 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.373871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.373891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.373911 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.477146 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.477193 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.477209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.477232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.477248 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.582450 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.582493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.582507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.582530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.582543 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.685033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.685069 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.685079 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.685093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.685105 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.787528 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.787627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.787653 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.787686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.787709 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.891228 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.891276 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.891318 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.891338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.891352 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.972580 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.972793 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.972964 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.972978 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:07 crc kubenswrapper[4922]: E0218 11:38:07.973149 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:07 crc kubenswrapper[4922]: E0218 11:38:07.973752 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.973812 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:18:19.064430432 +0000 UTC Feb 18 11:38:07 crc kubenswrapper[4922]: E0218 11:38:07.973809 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:07 crc kubenswrapper[4922]: E0218 11:38:07.974496 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.975121 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:38:07 crc kubenswrapper[4922]: E0218 11:38:07.975454 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.994476 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.994533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.994552 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.994577 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:07 crc kubenswrapper[4922]: I0218 11:38:07.994600 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:07Z","lastTransitionTime":"2026-02-18T11:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.097739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.097791 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.097803 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.097820 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.097833 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.200572 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.200675 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.200712 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.200755 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.200783 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.304433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.304496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.304512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.304532 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.304547 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.407702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.407749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.407760 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.407778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.407790 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.511355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.511483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.511509 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.511540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.511568 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.615147 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.615246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.615271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.615305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.615334 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.718155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.718230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.718249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.718308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.718327 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.821393 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.821443 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.821457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.821477 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.821490 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.924260 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.924326 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.924346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.924400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.924421 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:08Z","lastTransitionTime":"2026-02-18T11:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.974137 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:19:47.805369703 +0000 UTC Feb 18 11:38:08 crc kubenswrapper[4922]: I0218 11:38:08.991811 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:08Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.027726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.027779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.027797 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc 
kubenswrapper[4922]: I0218 11:38:09.027817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.027832 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.069792 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.091698 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.113596 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.128561 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 
11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.130650 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.130725 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.130748 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.130780 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.130804 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.142049 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.158990 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.175796 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.193659 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.219023 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.234298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.234340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.234424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.234459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.234474 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.237523 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.252088 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.282715 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299
dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.321474 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.336106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.336138 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.336148 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.336163 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.336171 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.347859 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to 
/host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.359379 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.370977 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.390840 4922 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:51Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:141\\\\nI0218 11:37:51.742382 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.742628 6997 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.742905 6997 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.743039 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.743155 6997 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.744128 6997 factory.go:656] Stopping watch factory\\\\nI0218 11:37:51.758771 6997 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0218 11:37:51.758806 6997 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0218 11:37:51.758875 6997 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:51.758908 6997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:51.759034 6997 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:09Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.439533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.439575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.439584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.439599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.439609 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.543705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.543784 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.543808 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.543839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.543864 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.647622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.647671 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.647686 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.647705 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.647720 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.751035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.751108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.751125 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.751152 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.751172 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.854471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.854583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.854609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.854639 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.854660 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.957993 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.958041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.958054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.958075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.958089 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:09Z","lastTransitionTime":"2026-02-18T11:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.972384 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:09 crc kubenswrapper[4922]: E0218 11:38:09.972494 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.972693 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.972775 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:09 crc kubenswrapper[4922]: E0218 11:38:09.973297 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:09 crc kubenswrapper[4922]: E0218 11:38:09.973489 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.974017 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:09 crc kubenswrapper[4922]: E0218 11:38:09.974208 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:09 crc kubenswrapper[4922]: I0218 11:38:09.974423 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:41:54.036130102 +0000 UTC Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.061410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.061468 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.061481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.061498 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.061511 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.164166 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.164212 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.164223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.164240 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.164253 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.268121 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.268198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.268220 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.268249 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.268277 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.372438 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.372506 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.372530 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.372567 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.372585 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.476789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.476879 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.476907 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.476939 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.476957 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.580521 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.580591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.580611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.580636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.580657 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.683700 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.683765 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.683789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.683824 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.683848 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.787545 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.787609 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.787631 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.787662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.787699 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.890955 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.891009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.891022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.891042 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.891055 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.975114 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:39:40.974995256 +0000 UTC Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.994128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.994201 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.994223 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.994251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:10 crc kubenswrapper[4922]: I0218 11:38:10.994269 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:10Z","lastTransitionTime":"2026-02-18T11:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.097720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.097799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.097825 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.097859 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.097880 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.201310 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.201426 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.201451 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.201483 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.201534 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.304417 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.304463 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.304474 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.304490 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.304502 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.407761 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.407841 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.407866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.407897 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.407918 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.511519 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.511635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.511662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.511702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.511727 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.614749 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.614835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.614853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.614878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.614896 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.718198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.718404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.718435 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.718461 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.718479 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.821622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.821702 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.821724 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.821752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.821774 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.924025 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.924075 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.924088 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.924103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.924119 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:11Z","lastTransitionTime":"2026-02-18T11:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.972259 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:11 crc kubenswrapper[4922]: E0218 11:38:11.972466 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.972631 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.972721 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:11 crc kubenswrapper[4922]: E0218 11:38:11.972962 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.972674 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:11 crc kubenswrapper[4922]: E0218 11:38:11.973310 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:11 crc kubenswrapper[4922]: E0218 11:38:11.973192 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.975626 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:38:46.750600678 +0000 UTC Feb 18 11:38:11 crc kubenswrapper[4922]: I0218 11:38:11.986304 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.027170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.027229 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.027246 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.027271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.027289 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.130540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.130606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.130627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.130987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.131075 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.233684 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.233726 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.233758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.233774 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.233786 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.336430 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.336508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.336533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.336565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.336593 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.440871 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.440936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.440955 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.440979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.441000 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.544414 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.544512 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.544533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.544588 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.544606 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.647523 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.647583 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.647597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.647620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.647634 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.750590 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.750660 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.750679 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.750706 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.750726 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.853304 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.853354 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.853382 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.853397 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.853406 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.955546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.955672 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.955692 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.955717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.955734 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:12Z","lastTransitionTime":"2026-02-18T11:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:12 crc kubenswrapper[4922]: I0218 11:38:12.975808 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:48:05.480725649 +0000 UTC Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.059006 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.059039 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.059049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.059065 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.059076 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.162533 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.162606 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.162629 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.162656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.162677 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.265595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.265670 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.265683 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.265699 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.265711 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.368057 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.368106 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.368114 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.368128 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.368139 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.471014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.471085 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.471111 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.471141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.471162 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.574085 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.574120 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.574129 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.574142 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.574152 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.677231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.677308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.677341 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.677418 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.677444 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.780484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.780552 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.780571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.780600 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.780627 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.883667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.883729 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.883745 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.883769 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.883786 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.972260 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.972347 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:13 crc kubenswrapper[4922]: E0218 11:38:13.972419 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.972397 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:13 crc kubenswrapper[4922]: E0218 11:38:13.972670 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.972723 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:13 crc kubenswrapper[4922]: E0218 11:38:13.973224 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
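The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs keep cycling through the same four pods because sandbox creation cannot proceed until a CNI config appears under /etc/kubernetes/cni/net.d/. For a capture this repetitive, reducing those entries to a per-pod tally makes the affected workloads easier to see; a small sketch, assuming the raw journal text has been saved into a string (variable and function names are illustrative):

import re
from collections import Counter

# Entries wrap across physical lines in this capture, so DOTALL lets a match cross a line break.
pattern = re.compile(
    r'"Error syncing pod, skipping".*?pod="([^"]+)" podUID="([^"]+)"',
    re.DOTALL,
)

def skipped_pods(journal_text: str) -> Counter:
    """Tally (pod, podUID) pairs the kubelet skipped while the CNI config was missing."""
    return Counter(pattern.findall(journal_text))

# For this excerpt the tally covers networking-console-plugin-85b44fc459-gdk6g,
# network-metrics-daemon-pspfr, network-check-target-xd92c and
# network-check-source-55646444c4-trplf, once per retry pass.
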
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:13 crc kubenswrapper[4922]: E0218 11:38:13.973330 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.976590 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:59:51.836766503 +0000 UTC Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.986216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.986248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.986256 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.986268 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:13 crc kubenswrapper[4922]: I0218 11:38:13.986281 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:13Z","lastTransitionTime":"2026-02-18T11:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.089599 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.089633 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.089644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.089658 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.089669 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.192230 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.192274 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.192287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.192303 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.192315 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.294231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.294270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.294281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.294294 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.294303 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.396506 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.396570 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.396592 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.396618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.396639 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.499951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.500005 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.500015 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.500031 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.500041 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.602545 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.602604 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.602621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.602646 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.602664 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.705455 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.705495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.705504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.705517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.705528 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.807734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.807796 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.807814 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.807837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.807853 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.911243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.911286 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.911297 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.911316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.911327 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:14Z","lastTransitionTime":"2026-02-18T11:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:14 crc kubenswrapper[4922]: I0218 11:38:14.977075 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:07:56.534093459 +0000 UTC Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.014323 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.014410 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.014424 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.014442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.014454 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.117505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.117640 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.117667 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.117697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.117720 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
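The kubelet-serving certificate lines above show the same expiration (2026-02-24 05:53:03) but a different rotation deadline on every pass; the certificate manager re-rolls a jittered deadline inside the certificate's lifetime each time it evaluates rotation, so the jumping dates are expected rather than a sign of clock trouble. A rough Python sketch of that behaviour, assuming a 70-90% jitter window and a one-year lifetime (both approximations, not values taken from this log or from the kubelet source):

import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered rotation point inside the certificate lifetime (approximate behaviour)."""
    lifetime = not_after - not_before
    fraction = random.uniform(0.7, 0.9)  # assumed window, not the exact upstream constant
    return not_before + timedelta(seconds=lifetime.total_seconds() * fraction)

expires = datetime(2026, 2, 24, 5, 53, 3)   # expiration printed in the log lines above
issued = expires - timedelta(days=365)      # assumed one-year lifetime; the log does not state it
print(rotation_deadline(issued, expires))   # lands weeks to months before expiry, like the deadlines above
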
Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.220238 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.220314 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.220338 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.220404 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.220429 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.323914 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.323982 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.323998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.324041 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.324054 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.428105 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.428167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.428182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.428203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.428219 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.532391 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.532433 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.532444 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.532459 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.532471 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.635643 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.636033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.636048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.636067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.636081 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.739932 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.740004 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.740018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.740043 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.740058 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.843466 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.843544 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.843563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.843589 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.843606 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.946092 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.946161 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.946174 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.946191 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.946222 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:15Z","lastTransitionTime":"2026-02-18T11:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.972071 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.972106 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:15 crc kubenswrapper[4922]: E0218 11:38:15.972239 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.972115 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.972106 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:15 crc kubenswrapper[4922]: E0218 11:38:15.972548 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:15 crc kubenswrapper[4922]: E0218 11:38:15.972743 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:15 crc kubenswrapper[4922]: E0218 11:38:15.972455 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:15 crc kubenswrapper[4922]: I0218 11:38:15.978204 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:47:02.301635378 +0000 UTC Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.049104 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.049169 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.049188 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.049213 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.049231 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
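Two knock-on effects of the missing network appear in this stretch: pods that need the pod network (networking-console-plugin, network-check-target, network-check-source, network-metrics-daemon) cannot get a sandbox, so their sync is skipped, and the kubelet-serving certificate manager reports a rotation deadline (2025-11-30) that is already in the past relative to the node clock (2026-02-18), the same stale-certificate theme that surfaces in the webhook failures below. The following hedged Go sketch checks whether an on-disk certificate is valid at the current node time; the PEM path is a placeholder, not one taken from this log.

// certwindow.go - hedged sketch: parse a PEM certificate and compare its
// validity window against the local clock, as quick triage for the
// expired-certificate symptoms in this log. The path below is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	const certPath = "/path/to/serving-cert.pem" // placeholder, not from the log
	data, err := os.ReadFile(certPath)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is outside its validity window")
	}
}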
Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.152308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.152412 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.152439 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.152467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.152489 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.255697 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.255752 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.255772 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.255795 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.255814 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.360209 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.360282 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.360307 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.360409 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.360446 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.419668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.419720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.419734 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.419753 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.419767 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.438803 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.442613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.442645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
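The failed status patch above is the first clear root-cause signal in this section: the API server cannot call the node.network-node-identity.openshift.io validating webhook because the endpoint at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-02-18, so every node-status update is rejected and retried. A hedged diagnostic sketch that dials that endpoint and prints the presented certificate's validity window follows; chain verification is skipped deliberately so the expired certificate can still be inspected, and the address is the one quoted in the error.

// webhookcert.go - hedged sketch: dial the webhook endpoint named in the
// error above and print the validity window of the certificate it presents.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint quoted in the webhook error above
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial error:", err)
		return
	}
	defer conn.Close()
	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("cert[%d] subject=%s NotBefore=%s NotAfter=%s\n",
			i, cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	}
}

If the printed NotAfter matches 2025-08-24T17:21:41Z from the error text, the fix is to rotate the webhook's serving certificate (or correct the node clock if it is wrong), after which these status patches should start succeeding.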
event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.442657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.442673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.442685 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.455839 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.459221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.459252 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.459264 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.459281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.459293 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.471719 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.475135 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.475184 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.475198 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.475217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.475232 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.485691 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.488996 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.489033 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.489044 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.489058 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.489067 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.503333 4922 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:38:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"142c3684-8991-4ed8-97d2-827c32777413\\\",\\\"systemUUID\\\":\\\"e6651d54-876d-4ba5-b7d2-813fd96498e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:16 crc kubenswrapper[4922]: E0218 11:38:16.503504 4922 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.505078 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.505130 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.505143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.505162 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.505174 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.608024 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.608113 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.608133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.608167 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.608186 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.711542 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.711627 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.711645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.711673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.711686 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.815110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.815159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.815170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.815188 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.815201 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.918243 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.918283 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.918291 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.918305 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.918314 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:16Z","lastTransitionTime":"2026-02-18T11:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:16 crc kubenswrapper[4922]: I0218 11:38:16.978704 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:25:23.818566832 +0000 UTC Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.021067 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.021254 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.021280 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.021301 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.021317 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.124051 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.124108 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.124126 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.124149 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.124167 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.227471 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.227556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.227584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.227616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.227643 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.330345 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.330457 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.330479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.330508 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.330527 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.433788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.433860 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.433878 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.433899 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.433915 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.537145 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.537226 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.537251 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.537281 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.537305 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.641972 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.642054 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.642072 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.642103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.642124 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.745221 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.745295 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.745315 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.745340 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.745392 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.750120 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.750301 4922 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.750428 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs podName:4702cf45-b47b-4291-a553-5bfc7bc22674 nodeName:}" failed. No retries permitted until 2026-02-18 11:39:21.750401655 +0000 UTC m=+163.478105775 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs") pod "network-metrics-daemon-pspfr" (UID: "4702cf45-b47b-4291-a553-5bfc7bc22674") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.849103 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.849170 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.849192 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.849217 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.849235 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.952754 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.952839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.952863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.952891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.952914 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:17Z","lastTransitionTime":"2026-02-18T11:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.972598 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.972786 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.973224 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.973435 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.973751 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.973763 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.974144 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:17 crc kubenswrapper[4922]: E0218 11:38:17.974279 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:17 crc kubenswrapper[4922]: I0218 11:38:17.978883 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:57:52.933419806 +0000 UTC Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.055479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.055540 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.055563 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.055593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.055619 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.158534 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.158585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.158594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.158616 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.158626 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.261789 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.261846 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.261866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.261891 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.261911 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.364562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.364618 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.364635 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.364659 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.364677 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.467400 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.467473 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.467493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.467516 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.467534 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.571405 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.571505 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.571527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.571559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.571585 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.674884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.674961 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.674985 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.675014 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.675036 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.778231 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.778275 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.778285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.778299 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.778308 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.881219 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.881259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.881270 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.881285 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.881297 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.979070 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:52:59.815705932 +0000 UTC Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.983817 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.983892 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.983919 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.983951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.983985 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:18Z","lastTransitionTime":"2026-02-18T11:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:18 crc kubenswrapper[4922]: I0218 11:38:18.996259 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"384d508e-8f6a-427d-a85d-85bb61d8405e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0218 11:36:58.792829 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 11:36:58.792990 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 11:36:58.794119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-56419459/tls.crt::/tmp/serving-cert-56419459/tls.key\\\\\\\"\\\\nI0218 11:36:58.994875 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 11:36:59.009974 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 11:36:59.010010 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 11:36:59.010039 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 11:36:59.010047 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 11:36:59.021309 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 11:36:59.021338 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021343 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 11:36:59.021347 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 11:36:59.021350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 11:36:59.021353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 11:36:59.021356 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 11:36:59.021636 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 11:36:59.024101 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:18Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.018551 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://454f72313609f075cd3f659b19279f8c3b07e8fea70f0ca7c88c5ca8e2e57397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0577fe2fd5192d4ec36ec8bfc6259a533b6422ef672d8153f601c2ac96bb5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.044289 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-26zbd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"592c6351-c252-4c19-b3b1-167096be2de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fca36b120b6a8f3bf08a810358bdd23b8b3c3105ede8e850232a9fef2d322db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7585194c5a1150a4dc59095db7e04747db55227d8005c7b472fcd61f6b2480e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89daf582468d65a1bb8b328ea34eca65ffc45ab10f932d8e2770b0afedcb8c1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e7189806450269de2a8b03e9776336e5b727da219f6ba797b5249205994ce00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66447f702724cd3d39c49873ee30b35df3d9a2cc1f1a7dbdcf39fc64673febc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c04f0a35c3197d7383659633eac3058f316dd2b395515e372293162567baa959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b85fcbd7611f3c38d84b9c892b62a9677e205d9f690e6cd2057dab3276e296\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55zqx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-26zbd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.059436 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648f85d5-dbc6-4db6-b590-3edc96740212\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e419e9c268c54b7829fa957e360c5b2c9943232871f9773ab32017c36a44b79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61f493c42d7223ae48d738cf9973c2f6f5997722b017cc1665ec5f27ea0542dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8l2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hjnxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 
11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.072826 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w46bt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067f44ac-9e60-4581-87cc-f2e1c823fc4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4c52fc2f1b105e90957f8cd7424352990f9ec9e5350d1a501980bfda1613066\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5ks8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w46bt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.087720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.087778 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.087832 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.087855 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.087872 4922 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.088983 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pspfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4702cf45-b47b-4291-a553-5bfc7bc22674\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7224x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:37:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pspfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.101087 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc6a195-a6bd-42d8-990f-60b614c51413\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc50b682470a4987df420a64c6ec491b7137229551303ea861e8c4c037cba371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32c42a81cb689c2599fc99a29686dcfa4beb5434da7149bbe8ca19d545a579bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c42a81cb689c2599fc99a29686dcfa4beb5434da7149bbe8ca19d545a579bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 
crc kubenswrapper[4922]: I0218 11:38:19.122150 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e47a2352-be16-435d-9207-078c64f99e77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d641439c1ae42b2579e47041fc257fa37643ec1ef4fcacf55b15b82d6ec02cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac316efde113b3547d00f480ee2c658958ba4f99c719d0920f333ab2a62ebaf6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6a8f8152ee7536b35054611238f4050c7f0e3a756f74ae8525af7f50150f88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.140987 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"143e32d8-7a32-4ab1-9790-a5b4fe1f0180\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c70b47d4c3df2edbdf1c33bdcff5642932d5212a23d54b20e79ec909b2e3de12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75e27a0c7d36ad7d1c3f884b9bb69b963a922ad7851d263b3761ed5bc7e79
a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a488435a081bdf17035a1ff4df73b0723045f0eef9049ec259198c697ecc33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39ce7973e25ba416a5f82cbafb8c1b3bea34718c49f8a8a56bd46970dc688727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.159581 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.176058 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.190732 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.190770 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.190784 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.190800 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.190838 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.199808 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.238201 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbe0881c-3173-4207-adb8-72c0dd4fa9c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e73616098bbc0fcdcd0b7e6b5bf610a7c193bdb65922e4532ce9553e9fc6e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a71f2a53a03eafd89329b955a7edbe71c610706392e0a07fa09b9a6247889913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac665b2d5a707e29339e7e8afa75dffcfd1dc1f42262e2643b2487ae68558a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0517235f78003f9b1a9044287c23aebed811299
dc48f24c6643dd3bfcf5cf1d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77a0c8c34dd5237a5b2d52b436f5c4deabd96a96ccb8073961dc49aaffd62c8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:36:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35c0c44a6ef5852d738ae2b528cfb201946f92b8c20d202c41b25be58aed7a4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5499f94aab1c37157b868db9ca72c519a84e0972ed310a60db2ea52a9148ddbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62432fda808af44d25af78beb104a775a1f211f1456d7112897af762cf72914f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:36:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:36:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.255820 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65825dc8bc12fe9933ff1392fd17a19042bf62d65c0b1ba22303094881745e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.272989 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c9xzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b4595ac-c521-4ada-950d-e1b01cdff99b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:47Z\\\",\\\"message\\\":\\\"2026-02-18T11:37:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f\\\\n2026-02-18T11:37:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_66f40f5e-a7a5-411c-a907-9573d8b5508f to /host/opt/cni/bin/\\\\n2026-02-18T11:37:02Z [verbose] multus-daemon started\\\\n2026-02-18T11:37:02Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:37:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zv6rh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c9xzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.289791 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3079844ef727cc58eb6253c7eaa0289ca84977a00334a10cdbf493b16133a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.294080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.294140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.294157 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.294178 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.294193 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.303068 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdb7cedc-b2e3-48f0-80e0-e17073b43228\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29cb76dded770e8691cd270c1c1197ffd3764c3fa73faf7f072a63c9e3e4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqgzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-znglx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.328380 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"653a41bb-bb1d-421c-a92b-7f2811d95edf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:37:51Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:141\\\\nI0218 11:37:51.742382 6997 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.742628 6997 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.742905 6997 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.743039 6997 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:37:51.743155 6997 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:37:51.744128 6997 factory.go:656] Stopping watch factory\\\\nI0218 11:37:51.758771 6997 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0218 11:37:51.758806 6997 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0218 11:37:51.758875 6997 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:37:51.758908 6997 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:37:51.759034 6997 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:37:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26p2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg4r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.342182 4922 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5qkb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd3723d-a12f-4c7c-a1ea-63bfef3c931a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:37:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61ff8efc7bd7cb5045841f85767cea766898289dab6f4081829d530fbaaf9dcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:37:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c45cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:36:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5qkb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:38:19Z is after 2025-08-24T17:21:41Z" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.397549 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.397602 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.397620 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.397644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.397662 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.500526 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.500597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.500637 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.500673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.500697 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.603495 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.603593 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.603621 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.603655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.603681 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.706030 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.706081 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.706093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.706110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.706122 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.809284 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.809432 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.809467 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.809494 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.809513 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.914877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.914936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.914954 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.914978 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.914995 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:19Z","lastTransitionTime":"2026-02-18T11:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
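The recurring KubeletNotReady condition is the kubelet's container-runtime network check: it keeps NetworkReady=false until a CNI network configuration appears under /etc/kubernetes/cni/net.d/, which on this node only happens once the default OVN-Kubernetes network (fronted by Multus) is up. A small Go sketch of the same check, assuming the usual .conf/.conflist naming:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // The kubelet reports NetworkReady=false until a network config
        // appears in its CNI conf dir; on this node that only happens once
        // the default OVN-Kubernetes network is up.
        confs, err := filepath.Glob("/etc/kubernetes/cni/net.d/*.conf*")
        if err != nil {
            fmt.Println("bad glob pattern:", err)
            return
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file found; node stays NotReady")
            return
        }
        fmt.Println("CNI configs:", confs)
    }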
Has your network provider started?"} Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.973152 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.973269 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.973194 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.973152 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:19 crc kubenswrapper[4922]: E0218 11:38:19.973444 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:19 crc kubenswrapper[4922]: E0218 11:38:19.973694 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:19 crc kubenswrapper[4922]: E0218 11:38:19.973891 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:19 crc kubenswrapper[4922]: E0218 11:38:19.974000 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:19 crc kubenswrapper[4922]: I0218 11:38:19.979574 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:29:27.591157044 +0000 UTC Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.018269 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.018316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.018333 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.018355 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.018396 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.121496 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.121565 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.121584 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.121611 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.121630 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.224962 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.225060 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.225080 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.225141 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.225158 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.327320 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.327442 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.327472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.327504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.327526 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.430562 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.430612 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.430622 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.430639 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.430648 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.534595 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.534680 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.534707 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.534739 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.534758 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.637656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.637736 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.637756 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.637788 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.637807 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.740481 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.740554 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.740571 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.740591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.740605 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
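The ovnkube-controller status reported earlier ("back-off 40s restarting failed container", restartCount 3) follows the kubelet's standard crash-loop handling: each failed restart doubles the delay from a 10-second base, capped at five minutes, so the third restart lands on 40s. The progression can be reproduced with a few lines of Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet crash-loop back-off: 10s base, doubled per failed restart,
        // capped at 5 minutes. restartCount 3 therefore lands on 40s.
        delay := 10 * time.Second
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d -> back-off %s\n", restart, delay)
            delay *= 2
            if delay > 5*time.Minute {
                delay = 5 * time.Minute
            }
        }
    }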
Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.843517 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.843645 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.843657 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.843674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.843684 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.947074 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.947144 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.947159 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.947179 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.947195 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:20Z","lastTransitionTime":"2026-02-18T11:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:20 crc kubenswrapper[4922]: I0218 11:38:20.979889 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:54:06.706686542 +0000 UTC Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.049440 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.049478 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.049489 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.049504 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.049515 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.152727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.152853 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.152877 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.152904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.152921 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
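The kubernetes.io/kubelet-serving certificate_manager lines are worth reading alongside the webhook failures: the serving certificate itself is valid until 2026-02-24, but the logged rotation deadline (2026-01-10, then 2025-12-28, then 2025-12-01 on successive attempts) is already in the past, so the kubelet keeps trying to rotate immediately and re-jitters the deadline on each pass. As a rough model, the deadline is picked at a random point roughly 70-90% of the way through the certificate's lifetime; a sketch under that assumption (the one-year lifetime below is also assumed, since the log only shows the expiry):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline approximates how the kubelet picks the jittered
    // rotation deadline it logs: a random point roughly 70-90% of the way
    // through the certificate's validity window.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed one-year lifetime
        deadline := nextRotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        fmt.Println("already overdue:", time.Now().After(deadline))
    }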
Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.256898 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.256979 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.256998 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.257022 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.257041 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.359793 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.359826 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.359835 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.359847 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.359855 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.463568 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.463628 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.463644 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.463668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.463686 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.567063 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.567119 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.567133 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.567155 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.567177 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.670143 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.670203 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.670216 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.670232 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.670245 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.773559 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.773591 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.773607 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.773625 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.773637 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.876308 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.876339 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.876348 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.876470 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.876515 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.973153 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.973268 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.973277 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:21 crc kubenswrapper[4922]: E0218 11:38:21.973414 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:21 crc kubenswrapper[4922]: E0218 11:38:21.973471 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.973542 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:21 crc kubenswrapper[4922]: E0218 11:38:21.973633 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:21 crc kubenswrapper[4922]: E0218 11:38:21.973839 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.979798 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.979872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.979895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.979918 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.979935 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:21Z","lastTransitionTime":"2026-02-18T11:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:21 crc kubenswrapper[4922]: I0218 11:38:21.980044 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:32:27.736818719 +0000 UTC Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.082842 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.082893 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.082905 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.082922 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.082936 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.186594 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.186650 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.186661 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.186677 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.186689 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.289486 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.289541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.289557 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.289575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.289586 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.392575 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.392651 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.392668 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.392694 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.392765 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.495823 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.495887 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.495908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.495931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.495948 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.599787 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.599830 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.599839 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.599858 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.599868 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.703000 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.703047 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.703056 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.703071 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.703080 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.805720 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.805777 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.805792 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.805813 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.805826 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.908721 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.908786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.908811 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.908837 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.908858 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:22Z","lastTransitionTime":"2026-02-18T11:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.973251 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:38:22 crc kubenswrapper[4922]: E0218 11:38:22.973508 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wg4r5_openshift-ovn-kubernetes(653a41bb-bb1d-421c-a92b-7f2811d95edf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" Feb 18 11:38:22 crc kubenswrapper[4922]: I0218 11:38:22.981204 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:38:57.273600198 +0000 UTC Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.011999 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.012048 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.012062 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.012083 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.012096 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.114346 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.114401 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.114411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.114428 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.114441 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.217380 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.217714 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.217828 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.217937 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.218026 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.320762 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.320843 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.320866 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.320895 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.320920 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.423514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.423807 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.423889 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.423958 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.424023 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.527689 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.527758 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.527776 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.527799 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.527816 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.630472 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.630556 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.630579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.631018 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.631280 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.734638 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.734687 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.734706 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.734727 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.734747 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.837786 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.837834 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.837854 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.837876 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.837895 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.941182 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.941273 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.941292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.941316 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.941333 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:23Z","lastTransitionTime":"2026-02-18T11:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.972982 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.973110 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:23 crc kubenswrapper[4922]: E0218 11:38:23.973168 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.973312 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.973346 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:23 crc kubenswrapper[4922]: E0218 11:38:23.973551 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:23 crc kubenswrapper[4922]: E0218 11:38:23.973720 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:23 crc kubenswrapper[4922]: E0218 11:38:23.973869 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:23 crc kubenswrapper[4922]: I0218 11:38:23.981595 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:59:43.393021193 +0000 UTC Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.043539 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.043585 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.043597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.043610 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.043620 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.146886 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.146927 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.146936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.146952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.146963 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.250532 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.250636 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.250655 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.250685 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.250706 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.353541 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.353586 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.353597 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.353613 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.353626 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.457202 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.457259 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.457271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.457292 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.457328 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.561663 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.562137 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.562214 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.562287 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.562353 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.665936 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.666034 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.666064 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.666099 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.666124 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.769458 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.769514 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.769527 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.769546 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.769558 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.873419 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.873469 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.873480 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.873497 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.873513 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.976908 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.976951 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.976963 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.976980 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.976990 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:24Z","lastTransitionTime":"2026-02-18T11:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:24 crc kubenswrapper[4922]: I0218 11:38:24.982478 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:16:49.602918227 +0000 UTC Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.079911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.079987 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.080008 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.080035 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.080053 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.182582 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.182642 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.182656 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.182673 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.182685 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.286484 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.286880 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.287028 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.287162 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.287332 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.390779 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.390865 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.390902 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.390933 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.390955 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.493411 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.493522 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.493548 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.493579 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.493600 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.596601 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.596649 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.596662 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.596680 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.596690 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.699952 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.700009 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.700027 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.700049 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.700067 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.803332 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.803449 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.803479 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.803511 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.803532 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.907160 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.907225 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.907247 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.907271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.907291 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:25Z","lastTransitionTime":"2026-02-18T11:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.973138 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.973138 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.973846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:25 crc kubenswrapper[4922]: E0218 11:38:25.974093 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:25 crc kubenswrapper[4922]: E0218 11:38:25.974247 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:25 crc kubenswrapper[4922]: E0218 11:38:25.974615 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.974813 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:25 crc kubenswrapper[4922]: E0218 11:38:25.975111 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:25 crc kubenswrapper[4922]: I0218 11:38:25.983474 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:18:39.844377495 +0000 UTC Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.011140 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.011717 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.011884 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.012052 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.012198 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.115661 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.116507 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.116674 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.116828 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.117157 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.220931 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.220989 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.221007 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.221037 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.221059 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.324215 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.324605 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.324872 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.325110 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.325307 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.429055 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.429408 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.429520 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.429626 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.429790 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.532863 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.532911 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.532923 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.532944 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.532957 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.635171 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.635248 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.635271 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.635298 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.635317 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.738093 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.738180 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.738211 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.738241 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.738263 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.799904 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.799969 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.799984 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.800001 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.800014 4922 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:38:26Z","lastTransitionTime":"2026-02-18T11:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.859924 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j"] Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.860651 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.863456 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.863692 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.863780 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.863857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.911154 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w46bt" podStartSLOduration=88.911122828 podStartE2EDuration="1m28.911122828s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:26.897324754 +0000 UTC m=+108.625028874" watchObservedRunningTime="2026-02-18 11:38:26.911122828 +0000 UTC m=+108.638826948" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.943131 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.943100091 podStartE2EDuration="15.943100091s" podCreationTimestamp="2026-02-18 11:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:26.926511881 +0000 UTC m=+108.654215991" watchObservedRunningTime="2026-02-18 11:38:26.943100091 +0000 UTC m=+108.670804201" Feb 18 11:38:26 crc 
kubenswrapper[4922]: I0218 11:38:26.957081 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.957341 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.957472 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.957573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.957684 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.958857 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.958832512 podStartE2EDuration="1m21.958832512s" podCreationTimestamp="2026-02-18 11:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:26.944072164 +0000 UTC m=+108.671776284" watchObservedRunningTime="2026-02-18 11:38:26.958832512 +0000 UTC m=+108.686536622" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.959288 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.959275012 podStartE2EDuration="1m0.959275012s" podCreationTimestamp="2026-02-18 11:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:26.958093195 +0000 UTC m=+108.685797305" watchObservedRunningTime="2026-02-18 11:38:26.959275012 +0000 UTC m=+108.686979152" Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.984214 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:19:54.691378788 +0000 UTC Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.984305 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 11:38:26 crc kubenswrapper[4922]: I0218 11:38:26.995243 4922 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.039275 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.039256517 podStartE2EDuration="1m28.039256517s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.038443787 +0000 UTC m=+108.766147897" watchObservedRunningTime="2026-02-18 11:38:27.039256517 +0000 UTC m=+108.766960617" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058544 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058616 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058642 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058671 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058695 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.058787 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.059866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.071155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.080841 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2f5e320-c4e9-42c9-8af5-f528cc87ffba-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7b8j\" (UID: \"a2f5e320-c4e9-42c9-8af5-f528cc87ffba\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.086156 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c9xzd" podStartSLOduration=89.086143491 podStartE2EDuration="1m29.086143491s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.085392473 +0000 UTC m=+108.813096563" watchObservedRunningTime="2026-02-18 11:38:27.086143491 +0000 UTC m=+108.813847571" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.117895 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podStartSLOduration=89.117872709 podStartE2EDuration="1m29.117872709s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.117637393 +0000 UTC m=+108.845341473" watchObservedRunningTime="2026-02-18 11:38:27.117872709 +0000 UTC m=+108.845576809" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.131137 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q5qkb" podStartSLOduration=89.131117761 podStartE2EDuration="1m29.131117761s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.130940496 +0000 UTC m=+108.858644596" watchObservedRunningTime="2026-02-18 11:38:27.131117761 +0000 UTC m=+108.858821841" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 
11:38:27.149438 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.149418082 podStartE2EDuration="1m28.149418082s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.147788533 +0000 UTC m=+108.875492623" watchObservedRunningTime="2026-02-18 11:38:27.149418082 +0000 UTC m=+108.877122182" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.179639 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.197172 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-26zbd" podStartSLOduration=89.197150136 podStartE2EDuration="1m29.197150136s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.183917584 +0000 UTC m=+108.911621674" watchObservedRunningTime="2026-02-18 11:38:27.197150136 +0000 UTC m=+108.924854226" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.221146 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hjnxs" podStartSLOduration=88.221128621 podStartE2EDuration="1m28.221128621s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.200213988 +0000 UTC m=+108.927918078" watchObservedRunningTime="2026-02-18 11:38:27.221128621 +0000 UTC m=+108.948832701" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.627899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" event={"ID":"a2f5e320-c4e9-42c9-8af5-f528cc87ffba","Type":"ContainerStarted","Data":"3a203f7facc64ff381081a49c17212041c7856393c8abc131e8c003f76c085fd"} Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.627948 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" event={"ID":"a2f5e320-c4e9-42c9-8af5-f528cc87ffba","Type":"ContainerStarted","Data":"4d0319557160db52cd2b9a747597dc410c4d850ddac4b4c0e255b8afb6e4e603"} Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.973014 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.973128 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.973141 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:27 crc kubenswrapper[4922]: I0218 11:38:27.973014 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:27 crc kubenswrapper[4922]: E0218 11:38:27.973279 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:27 crc kubenswrapper[4922]: E0218 11:38:27.973436 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:27 crc kubenswrapper[4922]: E0218 11:38:27.973594 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:27 crc kubenswrapper[4922]: E0218 11:38:27.973835 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:29 crc kubenswrapper[4922]: I0218 11:38:29.972795 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:29 crc kubenswrapper[4922]: I0218 11:38:29.972835 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:29 crc kubenswrapper[4922]: I0218 11:38:29.972861 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:29 crc kubenswrapper[4922]: E0218 11:38:29.972999 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:29 crc kubenswrapper[4922]: E0218 11:38:29.973224 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:29 crc kubenswrapper[4922]: E0218 11:38:29.973323 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:29 crc kubenswrapper[4922]: I0218 11:38:29.973839 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:29 crc kubenswrapper[4922]: E0218 11:38:29.974041 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:31 crc kubenswrapper[4922]: I0218 11:38:31.972100 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:31 crc kubenswrapper[4922]: I0218 11:38:31.972146 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:31 crc kubenswrapper[4922]: E0218 11:38:31.972274 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:31 crc kubenswrapper[4922]: I0218 11:38:31.972544 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:31 crc kubenswrapper[4922]: I0218 11:38:31.972601 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:31 crc kubenswrapper[4922]: E0218 11:38:31.972717 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:31 crc kubenswrapper[4922]: E0218 11:38:31.973066 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:31 crc kubenswrapper[4922]: E0218 11:38:31.973277 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:33 crc kubenswrapper[4922]: I0218 11:38:33.972537 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:33 crc kubenswrapper[4922]: I0218 11:38:33.972676 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:33 crc kubenswrapper[4922]: I0218 11:38:33.972729 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:33 crc kubenswrapper[4922]: I0218 11:38:33.972753 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:33 crc kubenswrapper[4922]: E0218 11:38:33.972689 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:33 crc kubenswrapper[4922]: E0218 11:38:33.972914 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:33 crc kubenswrapper[4922]: E0218 11:38:33.973023 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:33 crc kubenswrapper[4922]: E0218 11:38:33.973184 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.653661 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/1.log" Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.654535 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/0.log" Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.654607 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b4595ac-c521-4ada-950d-e1b01cdff99b" containerID="71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765" exitCode=1 Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.654654 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerDied","Data":"71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765"} Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.654693 4922 scope.go:117] "RemoveContainer" containerID="83f784f6e4f98fea5928c944062388eaa69d5a5782490bdb120cd735f54f52df" Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.655210 4922 scope.go:117] "RemoveContainer" containerID="71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765" Feb 18 11:38:34 crc kubenswrapper[4922]: E0218 11:38:34.655412 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-c9xzd_openshift-multus(9b4595ac-c521-4ada-950d-e1b01cdff99b)\"" pod="openshift-multus/multus-c9xzd" podUID="9b4595ac-c521-4ada-950d-e1b01cdff99b" Feb 18 11:38:34 crc kubenswrapper[4922]: I0218 11:38:34.688865 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7b8j" podStartSLOduration=96.688850375 podStartE2EDuration="1m36.688850375s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:27.647016753 +0000 UTC m=+109.374720833" watchObservedRunningTime="2026-02-18 11:38:34.688850375 +0000 UTC m=+116.416554455" Feb 18 11:38:35 crc kubenswrapper[4922]: I0218 11:38:35.660544 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/1.log" Feb 18 11:38:35 crc kubenswrapper[4922]: I0218 11:38:35.973287 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:35 crc kubenswrapper[4922]: I0218 11:38:35.973316 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:35 crc kubenswrapper[4922]: E0218 11:38:35.974181 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:35 crc kubenswrapper[4922]: I0218 11:38:35.973509 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:35 crc kubenswrapper[4922]: I0218 11:38:35.972031 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:35 crc kubenswrapper[4922]: E0218 11:38:35.974870 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:35 crc kubenswrapper[4922]: E0218 11:38:35.974892 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:35 crc kubenswrapper[4922]: E0218 11:38:35.974975 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:37 crc kubenswrapper[4922]: I0218 11:38:37.972908 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:37 crc kubenswrapper[4922]: I0218 11:38:37.972922 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:37 crc kubenswrapper[4922]: I0218 11:38:37.972922 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:37 crc kubenswrapper[4922]: I0218 11:38:37.972970 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:37 crc kubenswrapper[4922]: E0218 11:38:37.973341 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:37 crc kubenswrapper[4922]: E0218 11:38:37.973578 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:37 crc kubenswrapper[4922]: E0218 11:38:37.973777 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:37 crc kubenswrapper[4922]: E0218 11:38:37.974295 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:37 crc kubenswrapper[4922]: I0218 11:38:37.975289 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.673578 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/3.log" Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.676275 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerStarted","Data":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.676728 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.706506 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podStartSLOduration=100.706487426 podStartE2EDuration="1m40.706487426s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:38:38.705650706 +0000 UTC m=+120.433354826" watchObservedRunningTime="2026-02-18 11:38:38.706487426 +0000 UTC m=+120.434191526" Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.905091 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pspfr"] Feb 18 11:38:38 crc kubenswrapper[4922]: I0218 11:38:38.905245 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:38 crc kubenswrapper[4922]: E0218 11:38:38.905403 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:38 crc kubenswrapper[4922]: E0218 11:38:38.957325 4922 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 11:38:39 crc kubenswrapper[4922]: E0218 11:38:39.066551 4922 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:38:39 crc kubenswrapper[4922]: I0218 11:38:39.973208 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:39 crc kubenswrapper[4922]: I0218 11:38:39.973830 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:39 crc kubenswrapper[4922]: E0218 11:38:39.973906 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:39 crc kubenswrapper[4922]: I0218 11:38:39.973932 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:39 crc kubenswrapper[4922]: E0218 11:38:39.974053 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:39 crc kubenswrapper[4922]: E0218 11:38:39.974166 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:40 crc kubenswrapper[4922]: I0218 11:38:40.972023 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:40 crc kubenswrapper[4922]: E0218 11:38:40.972293 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:41 crc kubenswrapper[4922]: I0218 11:38:41.972215 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:41 crc kubenswrapper[4922]: I0218 11:38:41.972318 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:41 crc kubenswrapper[4922]: E0218 11:38:41.972483 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:41 crc kubenswrapper[4922]: E0218 11:38:41.972647 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:41 crc kubenswrapper[4922]: I0218 11:38:41.973809 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:41 crc kubenswrapper[4922]: E0218 11:38:41.974139 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:42 crc kubenswrapper[4922]: I0218 11:38:42.972599 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:42 crc kubenswrapper[4922]: E0218 11:38:42.972795 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:43 crc kubenswrapper[4922]: I0218 11:38:43.972019 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:43 crc kubenswrapper[4922]: I0218 11:38:43.972156 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:43 crc kubenswrapper[4922]: E0218 11:38:43.972242 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:43 crc kubenswrapper[4922]: I0218 11:38:43.972278 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:43 crc kubenswrapper[4922]: E0218 11:38:43.972501 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:43 crc kubenswrapper[4922]: E0218 11:38:43.972643 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:44 crc kubenswrapper[4922]: E0218 11:38:44.068208 4922 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:38:44 crc kubenswrapper[4922]: I0218 11:38:44.972782 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:44 crc kubenswrapper[4922]: E0218 11:38:44.973065 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:45 crc kubenswrapper[4922]: I0218 11:38:45.972733 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:45 crc kubenswrapper[4922]: I0218 11:38:45.972836 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:45 crc kubenswrapper[4922]: E0218 11:38:45.972957 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:45 crc kubenswrapper[4922]: I0218 11:38:45.972983 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:45 crc kubenswrapper[4922]: E0218 11:38:45.973113 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:45 crc kubenswrapper[4922]: E0218 11:38:45.973193 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:46 crc kubenswrapper[4922]: I0218 11:38:46.972769 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:46 crc kubenswrapper[4922]: E0218 11:38:46.972937 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:47 crc kubenswrapper[4922]: I0218 11:38:47.972441 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:47 crc kubenswrapper[4922]: I0218 11:38:47.972491 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:47 crc kubenswrapper[4922]: I0218 11:38:47.972531 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:47 crc kubenswrapper[4922]: E0218 11:38:47.972679 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:47 crc kubenswrapper[4922]: E0218 11:38:47.972843 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:47 crc kubenswrapper[4922]: E0218 11:38:47.973324 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:48 crc kubenswrapper[4922]: I0218 11:38:48.973741 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:48 crc kubenswrapper[4922]: E0218 11:38:48.975789 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:49 crc kubenswrapper[4922]: E0218 11:38:49.069430 4922 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:38:49 crc kubenswrapper[4922]: I0218 11:38:49.972933 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:49 crc kubenswrapper[4922]: I0218 11:38:49.972978 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:49 crc kubenswrapper[4922]: I0218 11:38:49.973024 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:49 crc kubenswrapper[4922]: E0218 11:38:49.974060 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:49 crc kubenswrapper[4922]: E0218 11:38:49.974148 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:49 crc kubenswrapper[4922]: I0218 11:38:49.974193 4922 scope.go:117] "RemoveContainer" containerID="71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765" Feb 18 11:38:49 crc kubenswrapper[4922]: E0218 11:38:49.974227 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:50 crc kubenswrapper[4922]: I0218 11:38:50.722560 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/1.log" Feb 18 11:38:50 crc kubenswrapper[4922]: I0218 11:38:50.722659 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerStarted","Data":"7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9"} Feb 18 11:38:50 crc kubenswrapper[4922]: I0218 11:38:50.972718 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:50 crc kubenswrapper[4922]: E0218 11:38:50.972888 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:51 crc kubenswrapper[4922]: I0218 11:38:51.972799 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:51 crc kubenswrapper[4922]: I0218 11:38:51.972848 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:51 crc kubenswrapper[4922]: I0218 11:38:51.972799 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:51 crc kubenswrapper[4922]: E0218 11:38:51.973027 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:51 crc kubenswrapper[4922]: E0218 11:38:51.973157 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:51 crc kubenswrapper[4922]: E0218 11:38:51.973334 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:52 crc kubenswrapper[4922]: I0218 11:38:52.972754 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:52 crc kubenswrapper[4922]: E0218 11:38:52.973046 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pspfr" podUID="4702cf45-b47b-4291-a553-5bfc7bc22674" Feb 18 11:38:53 crc kubenswrapper[4922]: I0218 11:38:53.972864 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:53 crc kubenswrapper[4922]: I0218 11:38:53.972912 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:53 crc kubenswrapper[4922]: I0218 11:38:53.972873 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:53 crc kubenswrapper[4922]: E0218 11:38:53.973088 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:38:53 crc kubenswrapper[4922]: E0218 11:38:53.973208 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:38:53 crc kubenswrapper[4922]: E0218 11:38:53.973507 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:38:54 crc kubenswrapper[4922]: I0218 11:38:54.972147 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:38:54 crc kubenswrapper[4922]: I0218 11:38:54.975353 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 11:38:54 crc kubenswrapper[4922]: I0218 11:38:54.975664 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.320792 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.972278 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.972422 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.972452 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.974753 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.975814 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.976414 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 11:38:55 crc kubenswrapper[4922]: I0218 11:38:55.976728 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.322493 4922 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.378292 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ks48g"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.379659 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.380636 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.382654 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.382830 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sz92"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.383902 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.385033 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.385737 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.389447 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78x9f"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.390156 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.390463 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.390850 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.392230 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.393096 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.393238 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.393447 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.396460 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.399419 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.400419 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410142 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-serving-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410198 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-encryption-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410240 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-serving-cert\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410272 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-client\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410315 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a5c3121-2765-47df-aa3f-22595e4b4ea9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410340 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-config\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410382 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-node-pullsecrets\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410453 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-images\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410507 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6wl\" (UniqueName: \"kubernetes.io/projected/bb42c973-5e2c-4650-b259-e882429363c7-kube-api-access-wl6wl\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410549 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb42c973-5e2c-4650-b259-e882429363c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-image-import-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/93f5445e-7408-4d36-aa4c-a7461f94d75a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit-dir\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410830 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bb42c973-5e2c-4650-b259-e882429363c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410864 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xntp\" (UniqueName: \"kubernetes.io/projected/8719fb44-5fea-4fd5-a516-5d2ab11c221c-kube-api-access-4xntp\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410897 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7w4c\" (UniqueName: \"kubernetes.io/projected/93f5445e-7408-4d36-aa4c-a7461f94d75a-kube-api-access-c7w4c\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.410935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csh8z\" (UniqueName: \"kubernetes.io/projected/4a5c3121-2765-47df-aa3f-22595e4b4ea9-kube-api-access-csh8z\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.415825 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.416874 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.417299 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.417765 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.418284 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.418525 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.422880 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.423026 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.423202 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.423253 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.423686 4922 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.426464 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sddqb"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.427757 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.433392 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.433717 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.433902 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.434137 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.434351 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.434625 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.434816 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.436067 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.441286 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7mwg"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.449781 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.451743 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.452121 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.452278 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.454269 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.454492 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.454775 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455065 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455086 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455218 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455334 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455644 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.455839 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.456074 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.457801 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-b6dxx"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.458565 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.460919 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.461274 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.461633 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.461880 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.461929 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.462133 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.462976 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.463605 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.467708 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.469504 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.469954 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.471207 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.472081 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.474530 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7twg2"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.475615 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.476672 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.477939 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.490453 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491201 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491318 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491417 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491503 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491586 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491673 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491800 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491853 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.491938 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.492093 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.492182 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.492328 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.492868 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.493920 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494343 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494476 4922 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494561 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494637 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.492101 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494936 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495078 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495152 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.494945 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495203 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495158 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495334 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495514 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495728 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.495968 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.496099 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.496607 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.497045 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prk5g"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.498820 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.501308 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.501575 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.525426 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.531979 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.532076 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536300 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit-dir\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536346 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536390 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afb43c7e-87bc-4450-ad81-6a22161fb794-machine-approver-tls\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536431 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42c973-5e2c-4650-b259-e882429363c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xntp\" (UniqueName: \"kubernetes.io/projected/8719fb44-5fea-4fd5-a516-5d2ab11c221c-kube-api-access-4xntp\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536484 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhx5p\" (UniqueName: \"kubernetes.io/projected/48dabf7e-d1d7-48b6-bc70-5cc88cdcf994-kube-api-access-jhx5p\") pod \"downloads-7954f5f757-b6dxx\" (UID: 
\"48dabf7e-d1d7-48b6-bc70-5cc88cdcf994\") " pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536513 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536537 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkpw\" (UniqueName: \"kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536558 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536579 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536618 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7w4c\" (UniqueName: \"kubernetes.io/projected/93f5445e-7408-4d36-aa4c-a7461f94d75a-kube-api-access-c7w4c\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536640 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpknq\" (UniqueName: 
\"kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536679 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536697 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csh8z\" (UniqueName: \"kubernetes.io/projected/4a5c3121-2765-47df-aa3f-22595e4b4ea9-kube-api-access-csh8z\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536725 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536754 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536780 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-serving-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536823 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-encryption-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536845 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfkj\" (UniqueName: \"kubernetes.io/projected/9951c815-3e1f-40ad-8597-b558366ffc58-kube-api-access-6jfkj\") pod 
\"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-client\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536884 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-serving-cert\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536940 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a5c3121-2765-47df-aa3f-22595e4b4ea9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536959 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4ea70ef-743e-44ef-804c-2f1321999baa-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.536979 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lpr\" (UniqueName: \"kubernetes.io/projected/f4ea70ef-743e-44ef-804c-2f1321999baa-kube-api-access-t6lpr\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537002 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9951c815-3e1f-40ad-8597-b558366ffc58-serving-cert\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537219 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: 
\"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537239 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537257 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-config\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537303 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537327 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537349 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbvn\" (UniqueName: \"kubernetes.io/projected/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-kube-api-access-sfbvn\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537391 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj97\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-kube-api-access-dhj97\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537415 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537434 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537454 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46td\" (UniqueName: \"kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537475 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537497 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537516 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537539 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537547 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537709 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537765 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537786 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-node-pullsecrets\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537830 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-images\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537850 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-config\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537869 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6m8\" (UniqueName: \"kubernetes.io/projected/afb43c7e-87bc-4450-ad81-6a22161fb794-kube-api-access-cz6m8\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537894 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6wl\" (UniqueName: \"kubernetes.io/projected/bb42c973-5e2c-4650-b259-e882429363c7-kube-api-access-wl6wl\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537895 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit-dir\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.537915 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slf52\" (UniqueName: \"kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc 
kubenswrapper[4922]: I0218 11:38:57.538015 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8719fb44-5fea-4fd5-a516-5d2ab11c221c-node-pullsecrets\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.538089 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.538728 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb42c973-5e2c-4650-b259-e882429363c7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.538783 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.538845 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.539003 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-config\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.539044 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.542840 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.561744 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.562113 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.562995 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.563510 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ea70ef-743e-44ef-804c-2f1321999baa-serving-cert\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.564061 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.564143 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-images\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.568177 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdprh"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.569122 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a5c3121-2765-47df-aa3f-22595e4b4ea9-config\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.569681 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwxzh"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.569977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb42c973-5e2c-4650-b259-e882429363c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.570521 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.570953 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.571171 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.571569 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.572245 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.572480 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.572665 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.572966 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573095 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573259 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-audit\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573676 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573819 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.573951 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574091 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574235 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574348 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574483 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574649 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-serving-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574894 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.574941 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.575447 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.575587 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.576824 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ks48g"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.576912 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.577659 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.577768 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sz92"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.578580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a5c3121-2765-47df-aa3f-22595e4b4ea9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.578806 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.580128 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ktkz9"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.580501 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.580560 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.580868 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.581222 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.581395 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-etcd-client\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.581824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-encryption-config\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.583290 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.584187 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.584754 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.585461 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.585630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719fb44-5fea-4fd5-a516-5d2ab11c221c-serving-cert\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.585865 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.585942 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.586345 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.586436 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.586454 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.587131 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.587161 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.587941 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.590452 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.590551 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592392 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-serving-cert\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592469 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-auth-proxy-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592685 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592841 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.592913 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-service-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593053 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593629 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-trusted-ca\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593837 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593863 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593897 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-image-import-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.593953 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/93f5445e-7408-4d36-aa4c-a7461f94d75a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.594174 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.594572 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.594647 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.596689 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8719fb44-5fea-4fd5-a516-5d2ab11c221c-image-import-ca\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.598898 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.600591 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.600794 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb42c973-5e2c-4650-b259-e882429363c7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.606265 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.608448 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/93f5445e-7408-4d36-aa4c-a7461f94d75a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.609236 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.609515 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.611487 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.612433 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.613504 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.614913 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.616769 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.617648 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.619255 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.621436 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9hdml"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.625553 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.629201 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ppzj4"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.629581 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.629875 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.629669 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78x9f"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.630513 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.634510 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.635454 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.636191 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.636532 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.638140 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7mwg"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.639774 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.641101 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.641585 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.642288 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.643416 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.645234 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.646484 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.647812 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prk5g"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.649214 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.650617 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l8pr7"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.651283 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b6dxx"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.651409 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.654048 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.655740 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.656819 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x69t8"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.658816 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.660203 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwxzh"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.661637 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sddqb"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.662878 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.664063 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdprh"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.665572 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7twg2"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.667298 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.668889 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.669922 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.671239 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.672640 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.674000 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.674597 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csh8z\" (UniqueName: \"kubernetes.io/projected/4a5c3121-2765-47df-aa3f-22595e4b4ea9-kube-api-access-csh8z\") pod \"machine-api-operator-5694c8668f-5sz92\" (UID: \"4a5c3121-2765-47df-aa3f-22595e4b4ea9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.675727 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ppzj4"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.676842 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.678050 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x69t8"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.679328 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.682067 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b"] Feb 18 11:38:57 crc 
kubenswrapper[4922]: I0218 11:38:57.684804 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9hdml"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.686256 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.687389 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.688610 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gbpm4"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.689446 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbpm4" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.689928 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gbpm4"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.694473 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.694503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afb43c7e-87bc-4450-ad81-6a22161fb794-machine-approver-tls\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.694529 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.694991 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fd90c9-8767-4d22-b88e-33fafd8026d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695053 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " 
pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695071 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695102 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3dc29a-edba-48bc-823b-33b792856873-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695221 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ntm\" (UniqueName: \"kubernetes.io/projected/2f7958cf-7c2d-4c29-bea8-5871267d5e16-kube-api-access-n7ntm\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695254 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfkj\" (UniqueName: \"kubernetes.io/projected/9951c815-3e1f-40ad-8597-b558366ffc58-kube-api-access-6jfkj\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695289 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faaf8fb4-0dba-494d-8a14-2dba7901f50a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695313 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2787200d-e2f9-477b-bb3c-c1c40201f13a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695339 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4ea70ef-743e-44ef-804c-2f1321999baa-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695382 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lpr\" (UniqueName: \"kubernetes.io/projected/f4ea70ef-743e-44ef-804c-2f1321999baa-kube-api-access-t6lpr\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695411 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-policies\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695436 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695461 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695515 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695544 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 
11:38:57.695573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ed74e60-8c19-47d2-b760-a6f8678f38da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695600 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695673 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k46td\" (UniqueName: \"kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695700 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faaf8fb4-0dba-494d-8a14-2dba7901f50a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695725 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbvn\" (UniqueName: \"kubernetes.io/projected/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-kube-api-access-sfbvn\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695748 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj97\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-kube-api-access-dhj97\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695773 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695797 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695822 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3dc29a-edba-48bc-823b-33b792856873-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695869 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4z2\" (UniqueName: \"kubernetes.io/projected/69fd90c9-8767-4d22-b88e-33fafd8026d8-kube-api-access-sw4z2\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695943 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6m8\" (UniqueName: \"kubernetes.io/projected/afb43c7e-87bc-4450-ad81-6a22161fb794-kube-api-access-cz6m8\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.695974 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-srv-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4ea70ef-743e-44ef-804c-2f1321999baa-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696106 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z5l\" (UniqueName: \"kubernetes.io/projected/353bd1c5-bab8-42cc-925a-d9776ac60b6b-kube-api-access-g4z5l\") pod \"migrator-59844c95c7-9kz6f\" (UID: \"353bd1c5-bab8-42cc-925a-d9776ac60b6b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slf52\" (UniqueName: \"kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696183 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed74e60-8c19-47d2-b760-a6f8678f38da-config\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696212 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-config\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696242 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696290 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9m2\" (UniqueName: \"kubernetes.io/projected/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-kube-api-access-rb9m2\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696317 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-auth-proxy-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696344 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2787200d-e2f9-477b-bb3c-c1c40201f13a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696350 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696391 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696423 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-service-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696451 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-config\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696477 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696490 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696527 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-metrics-certs\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696562 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhgh\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-kube-api-access-tzhgh\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696590 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083b5af3-1602-4add-a778-86b19df106c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696619 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696654 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-service-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.696710 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhx5p\" (UniqueName: \"kubernetes.io/projected/48dabf7e-d1d7-48b6-bc70-5cc88cdcf994-kube-api-access-jhx5p\") pod \"downloads-7954f5f757-b6dxx\" (UID: \"48dabf7e-d1d7-48b6-bc70-5cc88cdcf994\") " pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697421 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697453 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lkpw\" (UniqueName: \"kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697509 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-auth-proxy-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xntp\" (UniqueName: \"kubernetes.io/projected/8719fb44-5fea-4fd5-a516-5d2ab11c221c-kube-api-access-4xntp\") pod \"apiserver-76f77b778f-ks48g\" (UID: \"8719fb44-5fea-4fd5-a516-5d2ab11c221c\") " pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/afb43c7e-87bc-4450-ad81-6a22161fb794-machine-approver-tls\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.697998 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698186 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-config\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698198 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698346 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7b66c5-b258-4314-b3a5-e08b958245b6-service-ca-bundle\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698465 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698552 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-default-certificate\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-serving-cert\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.698744 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbk4\" (UniqueName: 
\"kubernetes.io/projected/027da92d-9293-48ea-bd00-47b0fcb186fd-kube-api-access-hlbk4\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699278 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpknq\" (UniqueName: \"kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699342 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb43c7e-87bc-4450-ad81-6a22161fb794-config\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699434 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2787200d-e2f9-477b-bb3c-c1c40201f13a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699512 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699576 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699607 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699695 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699548 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-service-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699756 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699847 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699876 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.699770 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjqpz\" (UniqueName: \"kubernetes.io/projected/374ac04a-b37d-42c8-b0ca-e2647c86bc74-kube-api-access-jjqpz\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700012 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700029 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fd90c9-8767-4d22-b88e-33fafd8026d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700161 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faaf8fb4-0dba-494d-8a14-2dba7901f50a-config\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700254 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700328 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9951c815-3e1f-40ad-8597-b558366ffc58-serving-cert\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/374ac04a-b37d-42c8-b0ca-e2647c86bc74-metrics-tls\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700485 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9951c815-3e1f-40ad-8597-b558366ffc58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.704725 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.705398 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.705980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9ww\" (UniqueName: \"kubernetes.io/projected/4891f319-eff4-4b7f-912e-45da55cb4fc2-kube-api-access-ms9ww\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.706054 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.706103 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca\") pod 
\"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.705983 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.706612 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dh67\" (UniqueName: \"kubernetes.io/projected/9f7b66c5-b258-4314-b3a5-e08b958245b6-kube-api-access-4dh67\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.706671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.707586 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.707698 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.707801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.707851 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/027da92d-9293-48ea-bd00-47b0fcb186fd-proxy-tls\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.707964 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed74e60-8c19-47d2-b760-a6f8678f38da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" 
(UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708024 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q9q\" (UniqueName: \"kubernetes.io/projected/083b5af3-1602-4add-a778-86b19df106c2-kube-api-access-l6q9q\") pod \"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708059 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708078 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708128 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-config\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708169 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-stats-auth\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.700549 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708401 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b494454-5efb-466f-81bd-754f7d6fa0a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.708469 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9bk\" (UniqueName: \"kubernetes.io/projected/8b3dc29a-edba-48bc-823b-33b792856873-kube-api-access-4z9bk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709138 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709261 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-client\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709423 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ea70ef-743e-44ef-804c-2f1321999baa-serving-cert\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709975 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-serving-cert\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710062 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-serving-cert\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709873 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-config\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.709913 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710291 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-trusted-ca\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710343 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710426 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710464 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710517 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b494454-5efb-466f-81bd-754f7d6fa0a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710569 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-dir\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710610 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-images\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710654 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-client\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.710694 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-encryption-config\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.712166 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-trusted-ca\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.712678 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.713175 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.713877 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.714155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.714438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.716205 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4ea70ef-743e-44ef-804c-2f1321999baa-serving-cert\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.716218 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9951c815-3e1f-40ad-8597-b558366ffc58-serving-cert\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:57 
crc kubenswrapper[4922]: I0218 11:38:57.716349 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-serving-cert\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.716412 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.717097 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.717969 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.718001 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7w4c\" (UniqueName: \"kubernetes.io/projected/93f5445e-7408-4d36-aa4c-a7461f94d75a-kube-api-access-c7w4c\") pod \"cluster-samples-operator-665b6dd947-jkmhc\" (UID: \"93f5445e-7408-4d36-aa4c-a7461f94d75a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.721170 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.723795 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.736878 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6wl\" (UniqueName: \"kubernetes.io/projected/bb42c973-5e2c-4650-b259-e882429363c7-kube-api-access-wl6wl\") pod \"openshift-apiserver-operator-796bbdcf4f-wzc5h\" (UID: \"bb42c973-5e2c-4650-b259-e882429363c7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.742242 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.752313 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.760513 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.762605 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.778110 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.782309 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.820760 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822033 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q9q\" (UniqueName: \"kubernetes.io/projected/083b5af3-1602-4add-a778-86b19df106c2-kube-api-access-l6q9q\") pod \"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822066 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-stats-auth\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822085 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b494454-5efb-466f-81bd-754f7d6fa0a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822103 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9bk\" (UniqueName: \"kubernetes.io/projected/8b3dc29a-edba-48bc-823b-33b792856873-kube-api-access-4z9bk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822122 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-client\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-serving-cert\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822160 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b494454-5efb-466f-81bd-754f7d6fa0a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822174 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-dir\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822189 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-images\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822204 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-client\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822233 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-encryption-config\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822265 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fd90c9-8767-4d22-b88e-33fafd8026d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822303 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3dc29a-edba-48bc-823b-33b792856873-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822299 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-dir\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822326 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ntm\" (UniqueName: \"kubernetes.io/projected/2f7958cf-7c2d-4c29-bea8-5871267d5e16-kube-api-access-n7ntm\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.822352 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2787200d-e2f9-477b-bb3c-c1c40201f13a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faaf8fb4-0dba-494d-8a14-2dba7901f50a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823264 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-policies\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823293 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823320 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ed74e60-8c19-47d2-b760-a6f8678f38da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faaf8fb4-0dba-494d-8a14-2dba7901f50a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823417 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3dc29a-edba-48bc-823b-33b792856873-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823468 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw4z2\" (UniqueName: \"kubernetes.io/projected/69fd90c9-8767-4d22-b88e-33fafd8026d8-kube-api-access-sw4z2\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823505 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-srv-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823528 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4z5l\" (UniqueName: \"kubernetes.io/projected/353bd1c5-bab8-42cc-925a-d9776ac60b6b-kube-api-access-g4z5l\") pod \"migrator-59844c95c7-9kz6f\" (UID: \"353bd1c5-bab8-42cc-925a-d9776ac60b6b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823562 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed74e60-8c19-47d2-b760-a6f8678f38da-config\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823604 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2787200d-e2f9-477b-bb3c-c1c40201f13a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823624 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9m2\" (UniqueName: \"kubernetes.io/projected/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-kube-api-access-rb9m2\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-config\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823671 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-metrics-certs\") pod \"router-default-5444994796-ktkz9\" (UID: 
\"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823690 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhgh\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-kube-api-access-tzhgh\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083b5af3-1602-4add-a778-86b19df106c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823730 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823750 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-service-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823806 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-default-certificate\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823839 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7b66c5-b258-4314-b3a5-e08b958245b6-service-ca-bundle\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823872 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2787200d-e2f9-477b-bb3c-c1c40201f13a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823892 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-serving-cert\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823914 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hlbk4\" (UniqueName: \"kubernetes.io/projected/027da92d-9293-48ea-bd00-47b0fcb186fd-kube-api-access-hlbk4\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjqpz\" (UniqueName: \"kubernetes.io/projected/374ac04a-b37d-42c8-b0ca-e2647c86bc74-kube-api-access-jjqpz\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fd90c9-8767-4d22-b88e-33fafd8026d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.823993 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faaf8fb4-0dba-494d-8a14-2dba7901f50a-config\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824016 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824042 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/374ac04a-b37d-42c8-b0ca-e2647c86bc74-metrics-tls\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824066 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9ww\" (UniqueName: \"kubernetes.io/projected/4891f319-eff4-4b7f-912e-45da55cb4fc2-kube-api-access-ms9ww\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824092 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824116 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dh67\" (UniqueName: 
\"kubernetes.io/projected/9f7b66c5-b258-4314-b3a5-e08b958245b6-kube-api-access-4dh67\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824138 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/027da92d-9293-48ea-bd00-47b0fcb186fd-proxy-tls\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824203 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed74e60-8c19-47d2-b760-a6f8678f38da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.824638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-config\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.825488 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.825766 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.826981 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.827161 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-audit-policies\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 
11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.827316 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-service-ca\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.827994 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.830310 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-etcd-client\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.831634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-serving-cert\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.831872 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-etcd-client\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.836650 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/374ac04a-b37d-42c8-b0ca-e2647c86bc74-metrics-tls\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.836928 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-serving-cert\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.837354 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2f7958cf-7c2d-4c29-bea8-5871267d5e16-encryption-config\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.843743 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.850899 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/083b5af3-1602-4add-a778-86b19df106c2-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.863443 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.885542 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.895072 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b3dc29a-edba-48bc-823b-33b792856873-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.904638 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.923377 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.943997 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.956915 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3dc29a-edba-48bc-823b-33b792856873-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.964551 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ks48g"] Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.966330 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:38:57 crc kubenswrapper[4922]: I0218 11:38:57.982300 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:57.999862 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc"] Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.003816 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.010386 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-metrics-certs\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.023581 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.038539 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-stats-auth\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.043324 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.046189 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h"] Feb 18 11:38:58 crc kubenswrapper[4922]: W0218 11:38:58.052081 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb42c973_5e2c_4650_b259_e882429363c7.slice/crio-ebbff29cb34fa65f1084ac364e70873185e148520ea29fdb2e74a7cc8a4e4e6c WatchSource:0}: Error finding container ebbff29cb34fa65f1084ac364e70873185e148520ea29fdb2e74a7cc8a4e4e6c: Status 404 returned error can't find the container with id ebbff29cb34fa65f1084ac364e70873185e148520ea29fdb2e74a7cc8a4e4e6c Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.061574 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.065313 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7b66c5-b258-4314-b3a5-e08b958245b6-service-ca-bundle\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.081859 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.103016 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.109296 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f7b66c5-b258-4314-b3a5-e08b958245b6-default-certificate\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.122510 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.142977 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.162982 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.173380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/69fd90c9-8767-4d22-b88e-33fafd8026d8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.182298 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.186616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fd90c9-8767-4d22-b88e-33fafd8026d8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.201848 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.208793 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5sz92"] Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.222846 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.229228 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.243297 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.262271 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.283006 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.301992 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.321979 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.323637 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/027da92d-9293-48ea-bd00-47b0fcb186fd-images\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.343257 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.361857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.367792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/027da92d-9293-48ea-bd00-47b0fcb186fd-proxy-tls\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.382699 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.402385 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.405765 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b494454-5efb-466f-81bd-754f7d6fa0a8-metrics-tls\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.422608 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.450668 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.457518 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b494454-5efb-466f-81bd-754f7d6fa0a8-trusted-ca\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.462200 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.487576 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.502778 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.522894 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.542682 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.562386 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.568865 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2787200d-e2f9-477b-bb3c-c1c40201f13a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.582116 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.590832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2787200d-e2f9-477b-bb3c-c1c40201f13a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.600634 4922 request.go:700] Waited for 1.009919302s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&limit=500&resourceVersion=0 Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.602159 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.622920 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.643535 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.653823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faaf8fb4-0dba-494d-8a14-2dba7901f50a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.663134 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.702882 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.710972 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed74e60-8c19-47d2-b760-a6f8678f38da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.722819 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.738890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/faaf8fb4-0dba-494d-8a14-2dba7901f50a-config\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.743380 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.754442 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" event={"ID":"bb42c973-5e2c-4650-b259-e882429363c7","Type":"ContainerStarted","Data":"ca8df75d49277b64cc8877f85c707ec70825cdc4a38fe081dd647da1feeb98df"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.754598 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" event={"ID":"bb42c973-5e2c-4650-b259-e882429363c7","Type":"ContainerStarted","Data":"ebbff29cb34fa65f1084ac364e70873185e148520ea29fdb2e74a7cc8a4e4e6c"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.756354 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" event={"ID":"4a5c3121-2765-47df-aa3f-22595e4b4ea9","Type":"ContainerStarted","Data":"ade595eb8b0948ce173e8e45817a8322764f6d260984a62fb381433fde6294fb"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.756448 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" event={"ID":"4a5c3121-2765-47df-aa3f-22595e4b4ea9","Type":"ContainerStarted","Data":"3bc876637c4a8bcadaa5fbf2bc10accf11c2546eae002cfe27908bbf1fbbbfc5"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.756459 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" event={"ID":"4a5c3121-2765-47df-aa3f-22595e4b4ea9","Type":"ContainerStarted","Data":"5dd5c00b4cbcf728ec53c4032a30beeb3e8a997561275183dea55345059ab4f5"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.758826 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" event={"ID":"93f5445e-7408-4d36-aa4c-a7461f94d75a","Type":"ContainerStarted","Data":"70146e719eec72378b931a7ae745ea7c36852f057a7bc8fdc74705f731e4ee3f"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.758864 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" event={"ID":"93f5445e-7408-4d36-aa4c-a7461f94d75a","Type":"ContainerStarted","Data":"33446b54746bccaa123f769e3809f84478a1c66a3bf1f31ae3381c80a093c8c5"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.758876 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" event={"ID":"93f5445e-7408-4d36-aa4c-a7461f94d75a","Type":"ContainerStarted","Data":"e550ec09022128d220868962aaff5d661987c7f6f617d145ed7f69f0bee436b8"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.762443 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.762753 4922 
generic.go:334] "Generic (PLEG): container finished" podID="8719fb44-5fea-4fd5-a516-5d2ab11c221c" containerID="7e253d56157209fb39f9490236d9d75719c1e2515a17451fa024a2c0b0ee3f80" exitCode=0 Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.762794 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" event={"ID":"8719fb44-5fea-4fd5-a516-5d2ab11c221c","Type":"ContainerDied","Data":"7e253d56157209fb39f9490236d9d75719c1e2515a17451fa024a2c0b0ee3f80"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.762831 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" event={"ID":"8719fb44-5fea-4fd5-a516-5d2ab11c221c","Type":"ContainerStarted","Data":"20e7dce9a6acaf67ad64fbf40b2f8911ea37201b34483d36238d92a2148409e0"} Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.768467 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4891f319-eff4-4b7f-912e-45da55cb4fc2-srv-cert\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.783079 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.797116 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed74e60-8c19-47d2-b760-a6f8678f38da-config\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.802441 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.822425 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.842444 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.861598 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.882583 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.910645 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.922268 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.942851 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.962788 4922 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 11:38:58 crc kubenswrapper[4922]: I0218 11:38:58.983144 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.002104 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.022063 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.042628 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.062497 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.082499 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.102006 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.121856 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.141800 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.163116 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.183411 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.201882 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.222970 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.242927 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.262028 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.283390 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.302939 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.323165 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.341987 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.361966 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.383308 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.404914 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.422755 4922 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.442928 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.462452 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.485571 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.502419 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.523225 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.564158 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfkj\" (UniqueName: \"kubernetes.io/projected/9951c815-3e1f-40ad-8597-b558366ffc58-kube-api-access-6jfkj\") pod \"authentication-operator-69f744f599-78x9f\" (UID: \"9951c815-3e1f-40ad-8597-b558366ffc58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.580115 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lpr\" (UniqueName: \"kubernetes.io/projected/f4ea70ef-743e-44ef-804c-2f1321999baa-kube-api-access-t6lpr\") pod \"openshift-config-operator-7777fb866f-7twg2\" (UID: \"f4ea70ef-743e-44ef-804c-2f1321999baa\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.588590 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.607033 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k46td\" (UniqueName: \"kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td\") pod \"controller-manager-879f6c89f-pj7qx\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.622469 4922 request.go:700] Waited for 1.926391199s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.623665 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj97\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-kube-api-access-dhj97\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.645463 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbvn\" (UniqueName: \"kubernetes.io/projected/ca3008b0-2ba6-4dfd-9fea-d1890e2af197-kube-api-access-sfbvn\") pod \"console-operator-58897d9998-sddqb\" (UID: \"ca3008b0-2ba6-4dfd-9fea-d1890e2af197\") " pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.657376 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6m8\" (UniqueName: \"kubernetes.io/projected/afb43c7e-87bc-4450-ad81-6a22161fb794-kube-api-access-cz6m8\") pod \"machine-approver-56656f9798-fs2lx\" (UID: \"afb43c7e-87bc-4450-ad81-6a22161fb794\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.681331 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slf52\" (UniqueName: \"kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52\") pod \"console-f9d7485db-nfn89\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.699659 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkpw\" (UniqueName: \"kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw\") pod \"route-controller-manager-6576b87f9c-v6m8j\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.716963 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhx5p\" (UniqueName: \"kubernetes.io/projected/48dabf7e-d1d7-48b6-bc70-5cc88cdcf994-kube-api-access-jhx5p\") pod \"downloads-7954f5f757-b6dxx\" (UID: \"48dabf7e-d1d7-48b6-bc70-5cc88cdcf994\") " pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.721935 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.729884 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.738756 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.740998 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8890a31d-16c4-4c0a-a1f8-4ce9d3314a80-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xvfbm\" (UID: \"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.761098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpknq\" (UniqueName: \"kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq\") pod \"oauth-openshift-558db77b4-q7mwg\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.774490 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" event={"ID":"8719fb44-5fea-4fd5-a516-5d2ab11c221c","Type":"ContainerStarted","Data":"f5d8595b5efb56b0c783073cffd5e94a9e803bd2a79aa2e9dd4e875d38705733"} Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.774542 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" event={"ID":"8719fb44-5fea-4fd5-a516-5d2ab11c221c","Type":"ContainerStarted","Data":"2d3ac63287f54d65348280c81b34792f9bb2289093e3618f46bac52340509cb5"} Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.777260 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q9q\" (UniqueName: \"kubernetes.io/projected/083b5af3-1602-4add-a778-86b19df106c2-kube-api-access-l6q9q\") pod \"multus-admission-controller-857f4d67dd-rwxzh\" (UID: \"083b5af3-1602-4add-a778-86b19df106c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.778600 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" event={"ID":"afb43c7e-87bc-4450-ad81-6a22161fb794","Type":"ContainerStarted","Data":"a6cdb6374783bfcbb321fe6dc0b32c039af0469cdf0fb76100ab334fb5f8f006"} Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.793925 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.802431 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ntm\" (UniqueName: \"kubernetes.io/projected/2f7958cf-7c2d-4c29-bea8-5871267d5e16-kube-api-access-n7ntm\") pod \"apiserver-7bbb656c7d-vhdd8\" (UID: \"2f7958cf-7c2d-4c29-bea8-5871267d5e16\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.824425 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9m2\" (UniqueName: \"kubernetes.io/projected/6ae30939-0d1c-4856-86e0-2b0b4797fa6a-kube-api-access-rb9m2\") pod \"etcd-operator-b45778765-prk5g\" (UID: \"6ae30939-0d1c-4856-86e0-2b0b4797fa6a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.830513 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.839602 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhgh\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-kube-api-access-tzhgh\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.861203 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4z5l\" (UniqueName: \"kubernetes.io/projected/353bd1c5-bab8-42cc-925a-d9776ac60b6b-kube-api-access-g4z5l\") pod \"migrator-59844c95c7-9kz6f\" (UID: \"353bd1c5-bab8-42cc-925a-d9776ac60b6b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.890925 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b494454-5efb-466f-81bd-754f7d6fa0a8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9b55m\" (UID: \"9b494454-5efb-466f-81bd-754f7d6fa0a8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.894838 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.897027 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.903950 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dh67\" (UniqueName: \"kubernetes.io/projected/9f7b66c5-b258-4314-b3a5-e08b958245b6-kube-api-access-4dh67\") pod \"router-default-5444994796-ktkz9\" (UID: \"9f7b66c5-b258-4314-b3a5-e08b958245b6\") " pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.919766 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9ww\" (UniqueName: \"kubernetes.io/projected/4891f319-eff4-4b7f-912e-45da55cb4fc2-kube-api-access-ms9ww\") pod \"olm-operator-6b444d44fb-zm5zz\" (UID: \"4891f319-eff4-4b7f-912e-45da55cb4fc2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.925768 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.937692 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.940641 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faaf8fb4-0dba-494d-8a14-2dba7901f50a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm5bk\" (UID: \"faaf8fb4-0dba-494d-8a14-2dba7901f50a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.953725 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.961013 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ed74e60-8c19-47d2-b760-a6f8678f38da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r84r2\" (UID: \"0ed74e60-8c19-47d2-b760-a6f8678f38da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.977456 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.988540 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7twg2"] Feb 18 11:38:59 crc kubenswrapper[4922]: I0218 11:38:59.989610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9bk\" (UniqueName: \"kubernetes.io/projected/8b3dc29a-edba-48bc-823b-33b792856873-kube-api-access-4z9bk\") pod \"openshift-controller-manager-operator-756b6f6bc6-nbzck\" (UID: \"8b3dc29a-edba-48bc-823b-33b792856873\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.003095 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbk4\" (UniqueName: \"kubernetes.io/projected/027da92d-9293-48ea-bd00-47b0fcb186fd-kube-api-access-hlbk4\") pod \"machine-config-operator-74547568cd-jkv6t\" (UID: \"027da92d-9293-48ea-bd00-47b0fcb186fd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.022744 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw4z2\" (UniqueName: \"kubernetes.io/projected/69fd90c9-8767-4d22-b88e-33fafd8026d8-kube-api-access-sw4z2\") pod \"kube-storage-version-migrator-operator-b67b599dd-js5gq\" (UID: \"69fd90c9-8767-4d22-b88e-33fafd8026d8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.049337 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.051104 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.051140 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78x9f"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.051321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjqpz\" (UniqueName: \"kubernetes.io/projected/374ac04a-b37d-42c8-b0ca-e2647c86bc74-kube-api-access-jjqpz\") pod \"dns-operator-744455d44c-wdprh\" (UID: \"374ac04a-b37d-42c8-b0ca-e2647c86bc74\") " pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.064060 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2787200d-e2f9-477b-bb3c-c1c40201f13a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gst7r\" (UID: \"2787200d-e2f9-477b-bb3c-c1c40201f13a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.071287 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:39:00 crc kubenswrapper[4922]: W0218 11:39:00.083470 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14e81dbf_6c73_481c_b758_4c15cc0f3258.slice/crio-e3e0521f1d2586e618292c2cbf0b96f0a5f28185a418b592c2676c55e3372f97 WatchSource:0}: Error finding container e3e0521f1d2586e618292c2cbf0b96f0a5f28185a418b592c2676c55e3372f97: Status 404 returned error can't find the container with id e3e0521f1d2586e618292c2cbf0b96f0a5f28185a418b592c2676c55e3372f97 Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.094532 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.114446 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.114744 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.134014 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.135005 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.135880 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rwxzh"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.146193 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.163920 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.167805 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.167893 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56jvt\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.167930 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5z7\" (UniqueName: \"kubernetes.io/projected/642557ec-2e08-451b-8a4c-b4e8cf88f048-kube-api-access-np5z7\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.167990 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168027 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76fv\" (UniqueName: \"kubernetes.io/projected/5566256e-1d22-41b3-8c9b-5765acbf0425-kube-api-access-t76fv\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/642557ec-2e08-451b-8a4c-b4e8cf88f048-proxy-tls\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168081 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168108 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates\") pod 
\"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168170 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168223 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/642557ec-2e08-451b-8a4c-b4e8cf88f048-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168273 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-srv-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168304 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.168412 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.168947 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:00.668926787 +0000 UTC m=+142.396631087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.173960 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.183134 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.188079 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.234319 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" Feb 18 11:39:00 crc kubenswrapper[4922]: W0218 11:39:00.238126 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod158f0672_c017_4e45_a564_96de81f21772.slice/crio-2d992c5fc4f578358729ef2e1374662b53a6ba4618dfe838d93d24e0c5e5a820 WatchSource:0}: Error finding container 2d992c5fc4f578358729ef2e1374662b53a6ba4618dfe838d93d24e0c5e5a820: Status 404 returned error can't find the container with id 2d992c5fc4f578358729ef2e1374662b53a6ba4618dfe838d93d24e0c5e5a820 Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.269706 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.270347 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-csi-data-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.270436 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-srv-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.270607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbmc\" (UniqueName: \"kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.270641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dfc07c-fa4c-48ad-9904-2a767310c6ac-cert\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.270693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271163 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271318 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snbf\" (UniqueName: \"kubernetes.io/projected/a62974f3-68b4-451d-9887-bf8af554ace0-kube-api-access-4snbf\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271383 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271401 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271417 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5ql\" (UniqueName: \"kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271472 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56jvt\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271542 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrhx\" (UniqueName: \"kubernetes.io/projected/913f9471-59b8-4494-964d-0db4086d77ab-kube-api-access-9zrhx\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271584 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-registration-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5z7\" (UniqueName: \"kubernetes.io/projected/642557ec-2e08-451b-8a4c-b4e8cf88f048-kube-api-access-np5z7\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271739 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2v8\" (UniqueName: \"kubernetes.io/projected/d17bc269-b566-4738-ac8f-354d91dd9245-kube-api-access-7p2v8\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: \"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271794 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271831 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-node-bootstrap-token\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271919 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-socket-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271959 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd5c\" (UniqueName: \"kubernetes.io/projected/a768634b-1586-4ba2-9a05-6a88f5befea1-kube-api-access-pfd5c\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.271975 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-metrics-tls\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272036 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-webhook-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272052 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272076 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272132 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76fv\" (UniqueName: \"kubernetes.io/projected/5566256e-1d22-41b3-8c9b-5765acbf0425-kube-api-access-t76fv\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272334 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/642557ec-2e08-451b-8a4c-b4e8cf88f048-proxy-tls\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272400 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913f9471-59b8-4494-964d-0db4086d77ab-tmpfs\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272735 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-certs\") pod 
\"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.272798 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-config-volume\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.273440 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.273765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.273956 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/adf4d88c-a19b-49bf-bb62-eef23b55efae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.274303 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.274858 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhsm\" (UniqueName: \"kubernetes.io/projected/66dfc07c-fa4c-48ad-9904-2a767310c6ac-kube-api-access-9vhsm\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a62974f3-68b4-451d-9887-bf8af554ace0-signing-cabundle\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275154 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d17bc269-b566-4738-ac8f-354d91dd9245-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: 
\"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275319 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817e4164-fa3e-41e5-8638-6a512b9d28bf-config\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275351 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwt9m\" (UniqueName: \"kubernetes.io/projected/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-kube-api-access-wwt9m\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275553 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfssv\" (UniqueName: \"kubernetes.io/projected/adf4d88c-a19b-49bf-bb62-eef23b55efae-kube-api-access-qfssv\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.276421 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:00.776393517 +0000 UTC m=+142.504097597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.277258 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.275601 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817e4164-fa3e-41e5-8638-6a512b9d28bf-serving-cert\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.287560 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5mm\" (UniqueName: \"kubernetes.io/projected/817e4164-fa3e-41e5-8638-6a512b9d28bf-kube-api-access-lh5mm\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.291524 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.292025 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.292188 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/642557ec-2e08-451b-8a4c-b4e8cf88f048-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.292233 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkcgp\" (UniqueName: \"kubernetes.io/projected/984302b1-545d-474c-a808-8c8f716e580e-kube-api-access-qkcgp\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 
11:39:00.292817 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a62974f3-68b4-451d-9887-bf8af554ace0-signing-key\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.293256 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/642557ec-2e08-451b-8a4c-b4e8cf88f048-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.293282 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-mountpoint-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.293327 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-plugins-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.293436 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.301515 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.304185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.304445 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-srv-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.305347 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5566256e-1d22-41b3-8c9b-5765acbf0425-profile-collector-cert\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: 
\"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.314506 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/642557ec-2e08-451b-8a4c-b4e8cf88f048-proxy-tls\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.328213 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.346723 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76fv\" (UniqueName: \"kubernetes.io/projected/5566256e-1d22-41b3-8c9b-5765acbf0425-kube-api-access-t76fv\") pod \"catalog-operator-68c6474976-p4qtq\" (UID: \"5566256e-1d22-41b3-8c9b-5765acbf0425\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.394862 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.394918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4snbf\" (UniqueName: \"kubernetes.io/projected/a62974f3-68b4-451d-9887-bf8af554ace0-kube-api-access-4snbf\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.394940 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.394965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5ql\" (UniqueName: \"kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.394989 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrhx\" (UniqueName: \"kubernetes.io/projected/913f9471-59b8-4494-964d-0db4086d77ab-kube-api-access-9zrhx\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 
11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395007 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-registration-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395030 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2v8\" (UniqueName: \"kubernetes.io/projected/d17bc269-b566-4738-ac8f-354d91dd9245-kube-api-access-7p2v8\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: \"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395046 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395069 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-node-bootstrap-token\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395087 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd5c\" (UniqueName: \"kubernetes.io/projected/a768634b-1586-4ba2-9a05-6a88f5befea1-kube-api-access-pfd5c\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395103 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-socket-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395118 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-webhook-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395133 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-metrics-tls\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: 
\"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395191 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913f9471-59b8-4494-964d-0db4086d77ab-tmpfs\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395206 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-certs\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395221 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-config-volume\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/adf4d88c-a19b-49bf-bb62-eef23b55efae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395265 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a62974f3-68b4-451d-9887-bf8af554ace0-signing-cabundle\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d17bc269-b566-4738-ac8f-354d91dd9245-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: \"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395295 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhsm\" (UniqueName: \"kubernetes.io/projected/66dfc07c-fa4c-48ad-9904-2a767310c6ac-kube-api-access-9vhsm\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395313 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwt9m\" (UniqueName: \"kubernetes.io/projected/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-kube-api-access-wwt9m\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395328 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817e4164-fa3e-41e5-8638-6a512b9d28bf-config\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395351 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfssv\" (UniqueName: \"kubernetes.io/projected/adf4d88c-a19b-49bf-bb62-eef23b55efae-kube-api-access-qfssv\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395371 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817e4164-fa3e-41e5-8638-6a512b9d28bf-serving-cert\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395411 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395434 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh5mm\" (UniqueName: \"kubernetes.io/projected/817e4164-fa3e-41e5-8638-6a512b9d28bf-kube-api-access-lh5mm\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkcgp\" (UniqueName: \"kubernetes.io/projected/984302b1-545d-474c-a808-8c8f716e580e-kube-api-access-qkcgp\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395478 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a62974f3-68b4-451d-9887-bf8af554ace0-signing-key\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-mountpoint-dir\") pod 
\"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395517 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-plugins-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395535 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-csi-data-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395555 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdbmc\" (UniqueName: \"kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.395570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dfc07c-fa4c-48ad-9904-2a767310c6ac-cert\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.396155 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:00.896120407 +0000 UTC m=+142.623824487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.399405 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-socket-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.399649 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-registration-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.401413 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.402955 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-csi-data-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.403054 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-mountpoint-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.403066 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a768634b-1586-4ba2-9a05-6a88f5befea1-plugins-dir\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.404045 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a62974f3-68b4-451d-9887-bf8af554ace0-signing-cabundle\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.404413 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913f9471-59b8-4494-964d-0db4086d77ab-tmpfs\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.404650 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.405078 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/817e4164-fa3e-41e5-8638-6a512b9d28bf-config\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.405936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-config-volume\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.412715 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.412828 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5z7\" (UniqueName: \"kubernetes.io/projected/642557ec-2e08-451b-8a4c-b4e8cf88f048-kube-api-access-np5z7\") pod \"machine-config-controller-84d6567774-h8l7q\" (UID: \"642557ec-2e08-451b-8a4c-b4e8cf88f048\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.432422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d17bc269-b566-4738-ac8f-354d91dd9245-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: \"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.433259 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-metrics-tls\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.433354 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a62974f3-68b4-451d-9887-bf8af554ace0-signing-key\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.433940 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-certs\") pod 
\"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.435026 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/817e4164-fa3e-41e5-8638-6a512b9d28bf-serving-cert\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.435260 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66dfc07c-fa4c-48ad-9904-2a767310c6ac-cert\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.435287 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/adf4d88c-a19b-49bf-bb62-eef23b55efae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.435448 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56jvt\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.436620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.436850 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.437000 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/984302b1-545d-474c-a808-8c8f716e580e-node-bootstrap-token\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.437477 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.440838 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4snbf\" (UniqueName: \"kubernetes.io/projected/a62974f3-68b4-451d-9887-bf8af554ace0-kube-api-access-4snbf\") pod \"service-ca-9c57cc56f-ppzj4\" (UID: \"a62974f3-68b4-451d-9887-bf8af554ace0\") " pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.450404 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913f9471-59b8-4494-964d-0db4086d77ab-webhook-cert\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.456813 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.464371 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5ql\" (UniqueName: \"kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql\") pod \"marketplace-operator-79b997595-nc7b9\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.480213 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrhx\" (UniqueName: \"kubernetes.io/projected/913f9471-59b8-4494-964d-0db4086d77ab-kube-api-access-9zrhx\") pod \"packageserver-d55dfcdfc-vgs8b\" (UID: \"913f9471-59b8-4494-964d-0db4086d77ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.497073 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.497590 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:39:00.997555935 +0000 UTC m=+142.725260015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.512668 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2v8\" (UniqueName: \"kubernetes.io/projected/d17bc269-b566-4738-ac8f-354d91dd9245-kube-api-access-7p2v8\") pod \"package-server-manager-789f6589d5-rj4b9\" (UID: \"d17bc269-b566-4738-ac8f-354d91dd9245\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.524061 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7mwg"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.531007 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.546483 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.549958 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh5mm\" (UniqueName: \"kubernetes.io/projected/817e4164-fa3e-41e5-8638-6a512b9d28bf-kube-api-access-lh5mm\") pod \"service-ca-operator-777779d784-v4cwm\" (UID: \"817e4164-fa3e-41e5-8638-6a512b9d28bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.555610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkcgp\" (UniqueName: \"kubernetes.io/projected/984302b1-545d-474c-a808-8c8f716e580e-kube-api-access-qkcgp\") pod \"machine-config-server-l8pr7\" (UID: \"984302b1-545d-474c-a808-8c8f716e580e\") " pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.560567 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.574731 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.598488 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd5c\" (UniqueName: \"kubernetes.io/projected/a768634b-1586-4ba2-9a05-6a88f5befea1-kube-api-access-pfd5c\") pod \"csi-hostpathplugin-x69t8\" (UID: \"a768634b-1586-4ba2-9a05-6a88f5befea1\") " pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.599854 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.600209 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.100193153 +0000 UTC m=+142.827897233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.601738 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwt9m\" (UniqueName: \"kubernetes.io/projected/d523bc23-dd6a-4d1f-b72b-2070ecce0cde-kube-api-access-wwt9m\") pod \"dns-default-gbpm4\" (UID: \"d523bc23-dd6a-4d1f-b72b-2070ecce0cde\") " pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.605770 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.619113 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhsm\" (UniqueName: \"kubernetes.io/projected/66dfc07c-fa4c-48ad-9904-2a767310c6ac-kube-api-access-9vhsm\") pod \"ingress-canary-9hdml\" (UID: \"66dfc07c-fa4c-48ad-9904-2a767310c6ac\") " pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.638940 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9hdml" Feb 18 11:39:00 crc kubenswrapper[4922]: W0218 11:39:00.639353 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e36551d_13cd_4a75_a29b_658850b46cb8.slice/crio-f32dc9e435ded17fa5224c75138342bd559fb562ee11ecfd7567b73083fd4cff WatchSource:0}: Error finding container f32dc9e435ded17fa5224c75138342bd559fb562ee11ecfd7567b73083fd4cff: Status 404 returned error can't find the container with id f32dc9e435ded17fa5224c75138342bd559fb562ee11ecfd7567b73083fd4cff Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.639513 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.654228 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l8pr7" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.659817 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdbmc\" (UniqueName: \"kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc\") pod \"collect-profiles-29523570-k54zv\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.663471 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfssv\" (UniqueName: \"kubernetes.io/projected/adf4d88c-a19b-49bf-bb62-eef23b55efae-kube-api-access-qfssv\") pod \"control-plane-machine-set-operator-78cbb6b69f-b6msr\" (UID: \"adf4d88c-a19b-49bf-bb62-eef23b55efae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: W0218 11:39:00.669975 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod353bd1c5_bab8_42cc_925a_d9776ac60b6b.slice/crio-270a203474ba84f435ec22b9c71503106e1a4c970e2a080f19dca4ce0186bede WatchSource:0}: Error finding container 270a203474ba84f435ec22b9c71503106e1a4c970e2a080f19dca4ce0186bede: Status 404 returned error can't find the container with id 270a203474ba84f435ec22b9c71503106e1a4c970e2a080f19dca4ce0186bede Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.682606 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.690323 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.700518 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.700612 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:39:01.200593124 +0000 UTC m=+142.928297204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.700743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.701020 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.201011425 +0000 UTC m=+142.928715505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.706475 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sddqb"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.741874 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.743616 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b6dxx"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.746596 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm"] Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.801785 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.802158 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.302126854 +0000 UTC m=+143.029830934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.844886 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" event={"ID":"9951c815-3e1f-40ad-8597-b558366ffc58","Type":"ContainerStarted","Data":"ff075026a738e076b481a73f3a75acb8d6738e1e24148acad8afe689bd021563"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.845345 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" event={"ID":"9951c815-3e1f-40ad-8597-b558366ffc58","Type":"ContainerStarted","Data":"4db9590941a93d410d1f3e94ff9fc3e772d8bf8bdac79a6ee60fb31a1b40a31b"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.851567 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nfn89" event={"ID":"14e81dbf-6c73-481c-b758-4c15cc0f3258","Type":"ContainerStarted","Data":"e3e0521f1d2586e618292c2cbf0b96f0a5f28185a418b592c2676c55e3372f97"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.853488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" event={"ID":"353bd1c5-bab8-42cc-925a-d9776ac60b6b","Type":"ContainerStarted","Data":"270a203474ba84f435ec22b9c71503106e1a4c970e2a080f19dca4ce0186bede"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.859080 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ktkz9" event={"ID":"9f7b66c5-b258-4314-b3a5-e08b958245b6","Type":"ContainerStarted","Data":"4dc87b2fbe3355753c06c4226a76516acee6e6b7368c2539573d63e8626cd376"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.876547 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" event={"ID":"9e36551d-13cd-4a75-a29b-658850b46cb8","Type":"ContainerStarted","Data":"f32dc9e435ded17fa5224c75138342bd559fb562ee11ecfd7567b73083fd4cff"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.885097 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" event={"ID":"083b5af3-1602-4add-a778-86b19df106c2","Type":"ContainerStarted","Data":"ced3405a371a1faa7601431d9e2a5855cd3c8213c521cab1927ce6b56082b1db"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.903848 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:00 crc kubenswrapper[4922]: E0218 11:39:00.904179 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:39:01.404168597 +0000 UTC m=+143.131872677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.907476 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" event={"ID":"afb43c7e-87bc-4450-ad81-6a22161fb794","Type":"ContainerStarted","Data":"7bd8c637f253795bbee1f4903cb5578e614d80fa7ebd4b253117e3d49cebab10"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.913807 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.916534 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" event={"ID":"158f0672-c017-4e45-a564-96de81f21772","Type":"ContainerStarted","Data":"2d992c5fc4f578358729ef2e1374662b53a6ba4618dfe838d93d24e0c5e5a820"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.917855 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" event={"ID":"f4ea70ef-743e-44ef-804c-2f1321999baa","Type":"ContainerStarted","Data":"7395b686de9c51569b28e03f4f310f89d8701a436f73b30737cce70fa5185b5b"} Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.944985 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:00 crc kubenswrapper[4922]: I0218 11:39:00.977237 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5sz92" podStartSLOduration=121.977220906 podStartE2EDuration="2m1.977220906s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:00.975746409 +0000 UTC m=+142.703450489" watchObservedRunningTime="2026-02-18 11:39:00.977220906 +0000 UTC m=+142.704924986" Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.004238 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.004634 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.504620999 +0000 UTC m=+143.232325079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: W0218 11:39:01.028067 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3bb2fc9_822c_4f53_98bf_70933744cf7f.slice/crio-52db1346d3e5c28577ccc9c9f0dc56c1b32dc4dc3a9883af1710dab9823da00c WatchSource:0}: Error finding container 52db1346d3e5c28577ccc9c9f0dc56c1b32dc4dc3a9883af1710dab9823da00c: Status 404 returned error can't find the container with id 52db1346d3e5c28577ccc9c9f0dc56c1b32dc4dc3a9883af1710dab9823da00c Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.114176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.114512 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.614501311 +0000 UTC m=+143.342205391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.216609 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.216910 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.716893202 +0000 UTC m=+143.444597282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.262854 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.277410 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wdprh"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.297116 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.318134 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.318533 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.818521125 +0000 UTC m=+143.546225205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.355669 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prk5g"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.376349 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.384194 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" podStartSLOduration=123.384179947 podStartE2EDuration="2m3.384179947s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:01.38234412 +0000 UTC m=+143.110048190" watchObservedRunningTime="2026-02-18 11:39:01.384179947 +0000 UTC m=+143.111884027" Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.419001 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.419329 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:01.919306476 +0000 UTC m=+143.647010626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.450213 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.455174 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.457502 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.521536 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.522209 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.02219669 +0000 UTC m=+143.749900770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.602162 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.628869 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.628915 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz"] Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.629262 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.629641 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.129626669 +0000 UTC m=+143.857330749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.653716 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jkmhc" podStartSLOduration=123.653700259 podStartE2EDuration="2m3.653700259s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:01.652709194 +0000 UTC m=+143.380413274" watchObservedRunningTime="2026-02-18 11:39:01.653700259 +0000 UTC m=+143.381404339" Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.731344 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.731806 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.231789235 +0000 UTC m=+143.959493315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:01 crc kubenswrapper[4922]: I0218 11:39:01.832519 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:01 crc kubenswrapper[4922]: E0218 11:39:01.832888 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.332873834 +0000 UTC m=+144.060577914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.228446 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.231016 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.730585541 +0000 UTC m=+144.458289661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.244546 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b6dxx" event={"ID":"48dabf7e-d1d7-48b6-bc70-5cc88cdcf994","Type":"ContainerStarted","Data":"021babde74a7b3b31a99a40a1dd014b4826e7ac93aab6ba43a2790a2d56fbc9a"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.248774 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" event={"ID":"2f7958cf-7c2d-4c29-bea8-5871267d5e16","Type":"ContainerStarted","Data":"c18498deb1197edbc6261c83110dcc9bce5f86b0dcdab3ffc97e6e31308e59cd"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.250683 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" event={"ID":"083b5af3-1602-4add-a778-86b19df106c2","Type":"ContainerStarted","Data":"d59ca7d52e30ccaf9a6eb7b58026ee4e77061cf64e6752c593e2f43c2e1a8036"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.258036 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" event={"ID":"69fd90c9-8767-4d22-b88e-33fafd8026d8","Type":"ContainerStarted","Data":"611ba1ac340163e184a22ea72cc9a93a243aff084edde9d42aa62f8f0db84524"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.271352 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" event={"ID":"158f0672-c017-4e45-a564-96de81f21772","Type":"ContainerStarted","Data":"5e927c6e4399c8def57efddaefd76cf053c5e57aa111182c3f59a17801fceec0"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.271623 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.274650 4922 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pj7qx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.274736 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.280246 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" event={"ID":"faaf8fb4-0dba-494d-8a14-2dba7901f50a","Type":"ContainerStarted","Data":"0d69a5b463245362a2382e19ad22752b28cbd883a56d832e09e61167a9957ad5"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.282029 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sddqb" event={"ID":"ca3008b0-2ba6-4dfd-9fea-d1890e2af197","Type":"ContainerStarted","Data":"00ae2fd6f157a70e9a2ab233d003233d31639ccbb06494ef24d137c6b612295d"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.283238 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" event={"ID":"2787200d-e2f9-477b-bb3c-c1c40201f13a","Type":"ContainerStarted","Data":"79767b99e85384af50c8693dba2a682ded6dd1b9ae9a283014045ca0afedee09"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.284355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" event={"ID":"027da92d-9293-48ea-bd00-47b0fcb186fd","Type":"ContainerStarted","Data":"5e0d376649c250a25133e6a296c4532a0a2731e6566d3d67e8ebb01ac4310734"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.301156 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" event={"ID":"353bd1c5-bab8-42cc-925a-d9776ac60b6b","Type":"ContainerStarted","Data":"b9a00451ff85851fa144629a5b4122f5e54d94ccbd87a0412a51fc58d7870fa3"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.306197 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nfn89" event={"ID":"14e81dbf-6c73-481c-b758-4c15cc0f3258","Type":"ContainerStarted","Data":"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.311666 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ktkz9" event={"ID":"9f7b66c5-b258-4314-b3a5-e08b958245b6","Type":"ContainerStarted","Data":"b9d1cb8abd2e3a3d807243a687c0d3ea717d5cea3dc89c6854b5d6ddaa23869a"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.313074 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" 
event={"ID":"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80","Type":"ContainerStarted","Data":"8e5bf2b992167ee3b0a7a2fd76cdb65f2030e45b2bc361fe1228f36de8c8e976"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.317488 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.318457 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" event={"ID":"374ac04a-b37d-42c8-b0ca-e2647c86bc74","Type":"ContainerStarted","Data":"a44e76d8d0f5bfda5fce5af3b3a8e798ee0a3cfbf59f263c7d6b0f45d4b4a0ed"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.320116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l8pr7" event={"ID":"984302b1-545d-474c-a808-8c8f716e580e","Type":"ContainerStarted","Data":"ebae5076964f25acb41eaeb433e4308b9c0b30fbb922d93128a427c221eb874d"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.320191 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.325221 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.332572 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" event={"ID":"9b494454-5efb-466f-81bd-754f7d6fa0a8","Type":"ContainerStarted","Data":"1524098063fb352e09c217245a846501f4dac58d17b40881f8a7facd19b78853"} Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.335996 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.835974308 +0000 UTC m=+144.563678398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.335995 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.336400 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" event={"ID":"8b3dc29a-edba-48bc-823b-33b792856873","Type":"ContainerStarted","Data":"11070fbc8db6773482100ef0c5f27a27d7f418d89908b236e65ee19d59c54ed6"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.336880 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.337283 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.837268801 +0000 UTC m=+144.564972881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.340034 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.342695 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.343049 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" event={"ID":"f4ea70ef-743e-44ef-804c-2f1321999baa","Type":"ContainerStarted","Data":"800b915b8f6df0c4b29fd6fbcc1d95d19b095de634f9f8ea8b178e5c698c4d21"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.345001 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" event={"ID":"0ed74e60-8c19-47d2-b760-a6f8678f38da","Type":"ContainerStarted","Data":"751d6cef6fd428c1e86b566db08e6c7860cd5d2215e658c684b2946ad52f3bbc"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.345189 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.346311 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" event={"ID":"6ae30939-0d1c-4856-86e0-2b0b4797fa6a","Type":"ContainerStarted","Data":"d040bb05b48528b220a1e1d086a8c99ad9feb7e262ee2d1740e5c6b5b5d2e203"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.347347 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.351060 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" event={"ID":"c3bb2fc9-822c-4f53-98bf-70933744cf7f","Type":"ContainerStarted","Data":"52db1346d3e5c28577ccc9c9f0dc56c1b32dc4dc3a9883af1710dab9823da00c"} Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.351463 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gbpm4"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.353606 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x69t8"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.355164 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.358694 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9hdml"] Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.360723 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wzc5h" 
podStartSLOduration=124.360705314 podStartE2EDuration="2m4.360705314s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:02.345189601 +0000 UTC m=+144.072893691" watchObservedRunningTime="2026-02-18 11:39:02.360705314 +0000 UTC m=+144.088409394" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.360828 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ppzj4"] Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.403044 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa233e7a_8a71_495c_b696_2f3dac9f0ada.slice/crio-0d0074ed8642e690505cb0bcb8a3858df0eaa0a88849d13791f2daa4f4d6c521 WatchSource:0}: Error finding container 0d0074ed8642e690505cb0bcb8a3858df0eaa0a88849d13791f2daa4f4d6c521: Status 404 returned error can't find the container with id 0d0074ed8642e690505cb0bcb8a3858df0eaa0a88849d13791f2daa4f4d6c521 Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.404243 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod913f9471_59b8_4494_964d_0db4086d77ab.slice/crio-2bb6698dc82485875bc82a38c85082c1a114762f3bc0714cd24ce971fd284aa0 WatchSource:0}: Error finding container 2bb6698dc82485875bc82a38c85082c1a114762f3bc0714cd24ce971fd284aa0: Status 404 returned error can't find the container with id 2bb6698dc82485875bc82a38c85082c1a114762f3bc0714cd24ce971fd284aa0 Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.406235 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nfn89" podStartSLOduration=124.406212496 podStartE2EDuration="2m4.406212496s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:02.402312387 +0000 UTC m=+144.130016477" watchObservedRunningTime="2026-02-18 11:39:02.406212496 +0000 UTC m=+144.133916576" Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.406503 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd17bc269_b566_4738_ac8f_354d91dd9245.slice/crio-4fcfedfe43678c41deb2499360ddc0b5661080187a21aa2887eaac9309b75ab1 WatchSource:0}: Error finding container 4fcfedfe43678c41deb2499360ddc0b5661080187a21aa2887eaac9309b75ab1: Status 404 returned error can't find the container with id 4fcfedfe43678c41deb2499360ddc0b5661080187a21aa2887eaac9309b75ab1 Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.438962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.440150 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:02.940122894 +0000 UTC m=+144.667826974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.450242 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod817e4164_fa3e_41e5_8638_6a512b9d28bf.slice/crio-c2133f63c30ca179015d02d1491013526e0b12ae60dccf733560ec7c5697dfc0 WatchSource:0}: Error finding container c2133f63c30ca179015d02d1491013526e0b12ae60dccf733560ec7c5697dfc0: Status 404 returned error can't find the container with id c2133f63c30ca179015d02d1491013526e0b12ae60dccf733560ec7c5697dfc0 Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.456969 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-78x9f" podStartSLOduration=124.45693924 podStartE2EDuration="2m4.45693924s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:02.455291808 +0000 UTC m=+144.182995888" watchObservedRunningTime="2026-02-18 11:39:02.45693924 +0000 UTC m=+144.184643310" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.483256 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ktkz9" podStartSLOduration=123.483234586 podStartE2EDuration="2m3.483234586s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:02.481979864 +0000 UTC m=+144.209683944" watchObservedRunningTime="2026-02-18 11:39:02.483234586 +0000 UTC m=+144.210938666" Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.488545 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5566256e_1d22_41b3_8c9b_5765acbf0425.slice/crio-a2d8c374f7618e73ccd087e9db919056d5bd8db50ea9c2df381591c83f13763a WatchSource:0}: Error finding container a2d8c374f7618e73ccd087e9db919056d5bd8db50ea9c2df381591c83f13763a: Status 404 returned error can't find the container with id a2d8c374f7618e73ccd087e9db919056d5bd8db50ea9c2df381591c83f13763a Feb 18 11:39:02 crc kubenswrapper[4922]: W0218 11:39:02.496371 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd523bc23_dd6a_4d1f_b72b_2070ecce0cde.slice/crio-4e29e2617ea27de1fa522ac06b00cc6cbffc2ee061ea2183412c95fbb4f0c8ef WatchSource:0}: Error finding container 4e29e2617ea27de1fa522ac06b00cc6cbffc2ee061ea2183412c95fbb4f0c8ef: Status 404 returned error can't find the container with id 4e29e2617ea27de1fa522ac06b00cc6cbffc2ee061ea2183412c95fbb4f0c8ef Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.528841 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podStartSLOduration=124.52882666 podStartE2EDuration="2m4.52882666s" 
podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:02.498756019 +0000 UTC m=+144.226460119" watchObservedRunningTime="2026-02-18 11:39:02.52882666 +0000 UTC m=+144.256530740" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.542334 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.542755 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.042740412 +0000 UTC m=+144.770444492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.643419 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.643720 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.143694897 +0000 UTC m=+144.871398977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.643854 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.644244 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:39:03.144237471 +0000 UTC m=+144.871941551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.718507 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.719071 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.723561 4922 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ks48g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.723620 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" podUID="8719fb44-5fea-4fd5-a516-5d2ab11c221c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.744812 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.744932 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.244913788 +0000 UTC m=+144.972617868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.745167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.745552 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.245537364 +0000 UTC m=+144.973241444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.846682 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.846876 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.346846598 +0000 UTC m=+145.074550698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.847171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.847856 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.347832843 +0000 UTC m=+145.075536923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.948070 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.948387 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.448327477 +0000 UTC m=+145.176031577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:02 crc kubenswrapper[4922]: I0218 11:39:02.948464 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:02 crc kubenswrapper[4922]: E0218 11:39:02.948896 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.448872641 +0000 UTC m=+145.176576721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.049806 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.049957 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.549927399 +0000 UTC m=+145.277631489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.050175 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.050507 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.550493703 +0000 UTC m=+145.278197783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.116127 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.118290 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.118361 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.151780 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.151997 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.651965331 +0000 UTC m=+145.379669431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.152088 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.152468 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.652450174 +0000 UTC m=+145.380154254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.253623 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.254076 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.754058135 +0000 UTC m=+145.481762215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.355048 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.355459 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.855441652 +0000 UTC m=+145.583145802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.381552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" event={"ID":"aa233e7a-8a71-495c-b696-2f3dac9f0ada","Type":"ContainerStarted","Data":"0d0074ed8642e690505cb0bcb8a3858df0eaa0a88849d13791f2daa4f4d6c521"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.393627 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" event={"ID":"4891f319-eff4-4b7f-912e-45da55cb4fc2","Type":"ContainerStarted","Data":"d1e042b27b59a1da20d4078cfed6a004e18aa6f2f57bdd3d14ddd84d243969c1"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.397264 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" event={"ID":"adf4d88c-a19b-49bf-bb62-eef23b55efae","Type":"ContainerStarted","Data":"c1d51ad68ed921dd436eee0cb365468fc743537c75614055fa9ee60ceca22696"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.401711 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" event={"ID":"afb43c7e-87bc-4450-ad81-6a22161fb794","Type":"ContainerStarted","Data":"a3c90b7d6bacd996ab468020ba8edd7eeb8331165cdebe9add803d87729721bb"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.412934 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sddqb" event={"ID":"ca3008b0-2ba6-4dfd-9fea-d1890e2af197","Type":"ContainerStarted","Data":"9ce9c276a8e5c26decc1b7c4d95b0fd62537bd8c363cb3b02b87cc2632007a2d"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.413203 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.414530 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" event={"ID":"027da92d-9293-48ea-bd00-47b0fcb186fd","Type":"ContainerStarted","Data":"65d330d319f9449b58d15169f88ad7a6acb8ec0fc3bf9876afbb115c7b1085b6"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.415595 4922 patch_prober.go:28] interesting pod/console-operator-58897d9998-sddqb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.415649 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sddqb" podUID="ca3008b0-2ba6-4dfd-9fea-d1890e2af197" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.418287 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b6dxx" event={"ID":"48dabf7e-d1d7-48b6-bc70-5cc88cdcf994","Type":"ContainerStarted","Data":"2c993aa4181e3c591333184139c4fdd1e2fe5f505f42b28f05ca8cb6ce671444"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.422485 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.426693 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fs2lx" podStartSLOduration=125.426668115 podStartE2EDuration="2m5.426668115s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.422661353 +0000 UTC m=+145.150365433" watchObservedRunningTime="2026-02-18 11:39:03.426668115 +0000 UTC m=+145.154372215" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.435085 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.435129 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b6dxx" podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.435349 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" event={"ID":"c3bb2fc9-822c-4f53-98bf-70933744cf7f","Type":"ContainerStarted","Data":"6194b5e6d35bc369d7c503f0dab1c1b4699e56c00e185999f0dd2d244bb4e4b6"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.436191 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 
11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.442501 4922 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-v6m8j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.442556 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.459736 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.460165 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:03.960149492 +0000 UTC m=+145.687853572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.478589 4922 generic.go:334] "Generic (PLEG): container finished" podID="2f7958cf-7c2d-4c29-bea8-5871267d5e16" containerID="b07cf4437cbc267ee1a52bb7027f8b8ea0689cc07e6154dc429ff4f87cfa0e5a" exitCode=0 Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.478654 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" event={"ID":"2f7958cf-7c2d-4c29-bea8-5871267d5e16","Type":"ContainerDied","Data":"b07cf4437cbc267ee1a52bb7027f8b8ea0689cc07e6154dc429ff4f87cfa0e5a"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.479720 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sddqb" podStartSLOduration=125.479702637 podStartE2EDuration="2m5.479702637s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.478010384 +0000 UTC m=+145.205714464" watchObservedRunningTime="2026-02-18 11:39:03.479702637 +0000 UTC m=+145.207406717" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.480897 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-b6dxx" podStartSLOduration=125.480890437 podStartE2EDuration="2m5.480890437s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.459899946 +0000 UTC m=+145.187604026" watchObservedRunningTime="2026-02-18 11:39:03.480890437 +0000 UTC m=+145.208594517" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.483908 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l8pr7" event={"ID":"984302b1-545d-474c-a808-8c8f716e580e","Type":"ContainerStarted","Data":"5b4fc0ef007d316fad1dbc341ba002a0863284420b1ba189011397c38969afc2"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.492062 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" event={"ID":"642557ec-2e08-451b-8a4c-b4e8cf88f048","Type":"ContainerStarted","Data":"20227f686332cad34e3df8e5a3e855e1a786f126dd7a8186dccb49aa54c66573"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.496828 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" podStartSLOduration=124.49681159 podStartE2EDuration="2m4.49681159s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.493529477 +0000 UTC m=+145.221233557" watchObservedRunningTime="2026-02-18 11:39:03.49681159 +0000 UTC m=+145.224515670" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.498887 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" event={"ID":"8890a31d-16c4-4c0a-a1f8-4ce9d3314a80","Type":"ContainerStarted","Data":"aedf250be9431181205f229da7e261f45b7d32d0337001be99513451534925aa"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.500634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" event={"ID":"5566256e-1d22-41b3-8c9b-5765acbf0425","Type":"ContainerStarted","Data":"a2d8c374f7618e73ccd087e9db919056d5bd8db50ea9c2df381591c83f13763a"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.502037 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" event={"ID":"8b3dc29a-edba-48bc-823b-33b792856873","Type":"ContainerStarted","Data":"8af51f2408c0810de69fc4521d83e2110bcc108ad7f3e68b264c01bcc3ce5b25"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.504064 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" event={"ID":"75c707c4-5c62-438f-8312-2307d3ef0ba8","Type":"ContainerStarted","Data":"28c2cf88217349a49db593635ea8e128208ef1ae24d7cc6d1020cc30632765bf"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.506290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" event={"ID":"a768634b-1586-4ba2-9a05-6a88f5befea1","Type":"ContainerStarted","Data":"24875c03230fe7b07f2bf2b1aeacb99091fe584af6a63a16379985eff3253dce"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.512866 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l8pr7" podStartSLOduration=6.512843586 podStartE2EDuration="6.512843586s" 
podCreationTimestamp="2026-02-18 11:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.509671985 +0000 UTC m=+145.237376065" watchObservedRunningTime="2026-02-18 11:39:03.512843586 +0000 UTC m=+145.240547666" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.514091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" event={"ID":"817e4164-fa3e-41e5-8638-6a512b9d28bf","Type":"ContainerStarted","Data":"c2133f63c30ca179015d02d1491013526e0b12ae60dccf733560ec7c5697dfc0"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.520001 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4ea70ef-743e-44ef-804c-2f1321999baa" containerID="800b915b8f6df0c4b29fd6fbcc1d95d19b095de634f9f8ea8b178e5c698c4d21" exitCode=0 Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.520088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" event={"ID":"f4ea70ef-743e-44ef-804c-2f1321999baa","Type":"ContainerDied","Data":"800b915b8f6df0c4b29fd6fbcc1d95d19b095de634f9f8ea8b178e5c698c4d21"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.535632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbpm4" event={"ID":"d523bc23-dd6a-4d1f-b72b-2070ecce0cde","Type":"ContainerStarted","Data":"4e29e2617ea27de1fa522ac06b00cc6cbffc2ee061ea2183412c95fbb4f0c8ef"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.549063 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" event={"ID":"913f9471-59b8-4494-964d-0db4086d77ab","Type":"ContainerStarted","Data":"2bb6698dc82485875bc82a38c85082c1a114762f3bc0714cd24ce971fd284aa0"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.572025 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.574487 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.074467056 +0000 UTC m=+145.802171136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.579294 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" event={"ID":"a62974f3-68b4-451d-9887-bf8af554ace0","Type":"ContainerStarted","Data":"9f1463891f17f48341e02c80d813b5f3c7115df8d26a1a8be676f77303b3e781"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.601145 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nbzck" podStartSLOduration=125.60112884 podStartE2EDuration="2m5.60112884s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.5651588 +0000 UTC m=+145.292862880" watchObservedRunningTime="2026-02-18 11:39:03.60112884 +0000 UTC m=+145.328832920" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.628152 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" podStartSLOduration=124.628136144 podStartE2EDuration="2m4.628136144s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.602079634 +0000 UTC m=+145.329783734" watchObservedRunningTime="2026-02-18 11:39:03.628136144 +0000 UTC m=+145.355840224" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.634403 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" event={"ID":"9e36551d-13cd-4a75-a29b-658850b46cb8","Type":"ContainerStarted","Data":"6267ab86fea6d91bef9e1d2fb055261c2673103810304b496229122d5cfe0a9c"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.635566 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.648899 4922 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q7mwg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.648939 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.653161 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xvfbm" podStartSLOduration=125.653146237 
podStartE2EDuration="2m5.653146237s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.630598896 +0000 UTC m=+145.358302976" watchObservedRunningTime="2026-02-18 11:39:03.653146237 +0000 UTC m=+145.380850317" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.672069 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" podStartSLOduration=125.672054256 podStartE2EDuration="2m5.672054256s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.671653686 +0000 UTC m=+145.399357766" watchObservedRunningTime="2026-02-18 11:39:03.672054256 +0000 UTC m=+145.399758336" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.678605 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" event={"ID":"d17bc269-b566-4738-ac8f-354d91dd9245","Type":"ContainerStarted","Data":"4fcfedfe43678c41deb2499360ddc0b5661080187a21aa2887eaac9309b75ab1"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.684317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9hdml" event={"ID":"66dfc07c-fa4c-48ad-9904-2a767310c6ac","Type":"ContainerStarted","Data":"8b504b001cdb830339dab531348794aef1e7dff43d9a2dc9460db743d8768619"} Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.685912 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.686439 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.186415479 +0000 UTC m=+145.914119559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.686709 4922 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pj7qx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.686744 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.687007 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.687345 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.187334772 +0000 UTC m=+145.915038862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.714268 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" podStartSLOduration=125.714248924 podStartE2EDuration="2m5.714248924s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.70621746 +0000 UTC m=+145.433921540" watchObservedRunningTime="2026-02-18 11:39:03.714248924 +0000 UTC m=+145.441953014" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.742635 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9hdml" podStartSLOduration=6.742605531 podStartE2EDuration="6.742605531s" podCreationTimestamp="2026-02-18 11:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:03.736785104 +0000 UTC m=+145.464489184" watchObservedRunningTime="2026-02-18 11:39:03.742605531 +0000 UTC m=+145.470309621" Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.788712 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.788917 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.288891673 +0000 UTC m=+146.016595753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.794691 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.794940 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.294928156 +0000 UTC m=+146.022632336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.896296 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.896893 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.396869076 +0000 UTC m=+146.124573156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.897039 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.897396 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.397358148 +0000 UTC m=+146.125062228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.998677 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.998869 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.498828077 +0000 UTC m=+146.226532167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:03 crc kubenswrapper[4922]: I0218 11:39:03.999245 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:03 crc kubenswrapper[4922]: E0218 11:39:03.999601 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.499587736 +0000 UTC m=+146.227291806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.100706 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.101156 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.601133476 +0000 UTC m=+146.328837556 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.101232 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.101627 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.601598718 +0000 UTC m=+146.329302798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.116854 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.116926 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.201914 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.202163 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.702139653 +0000 UTC m=+146.429843733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.202807 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.203188 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.703177209 +0000 UTC m=+146.430881279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.303766 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.304048 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.804034252 +0000 UTC m=+146.531738332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.405752 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.406237 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:04.906221699 +0000 UTC m=+146.633925789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.506751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.507068 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.007052361 +0000 UTC m=+146.734756431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.608354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.608738 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.108727525 +0000 UTC m=+146.836431605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.694016 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" event={"ID":"a62974f3-68b4-451d-9887-bf8af554ace0","Type":"ContainerStarted","Data":"cb12ef1479e4563d313fa40b0a093d64397251fa35ae5fb84e06aafcade603d0"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.698645 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" event={"ID":"083b5af3-1602-4add-a778-86b19df106c2","Type":"ContainerStarted","Data":"65c4aea109955d848178629401d9f210c9f469ab62fc5a28d6987fbd54481f48"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.700712 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" event={"ID":"642557ec-2e08-451b-8a4c-b4e8cf88f048","Type":"ContainerStarted","Data":"786d71aa506311a7b9db900698bd3f7301429e811f1275733fe62ca5b6103707"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.700761 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" event={"ID":"642557ec-2e08-451b-8a4c-b4e8cf88f048","Type":"ContainerStarted","Data":"107db514ae864b4e2b106c6e0c6c97719e7ce4f85f0e80f5dad24f2f718e6206"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.702700 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" event={"ID":"5566256e-1d22-41b3-8c9b-5765acbf0425","Type":"ContainerStarted","Data":"8d388362a3e88a1d99161f2e4f8dbf961fa358c8bd007e5b64d3db3c3a1da0d6"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.702860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.705571 4922 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-p4qtq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.705639 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" podUID="5566256e-1d22-41b3-8c9b-5765acbf0425" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.707556 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" event={"ID":"0ed74e60-8c19-47d2-b760-a6f8678f38da","Type":"ContainerStarted","Data":"3cb9a47c5c2c48cf77b858b29b12e1469109afb2aca6b57d27222f7ff280ea0f"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.708813 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.708928 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" event={"ID":"75c707c4-5c62-438f-8312-2307d3ef0ba8","Type":"ContainerStarted","Data":"28b5d335d68c0542326e75d8138276312eb3264af281f35e13ca715cee393fd1"} Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.708985 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.208964142 +0000 UTC m=+146.936668222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.709206 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.709507 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:39:05.209488375 +0000 UTC m=+146.937192455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.710334 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" event={"ID":"2787200d-e2f9-477b-bb3c-c1c40201f13a","Type":"ContainerStarted","Data":"c51ccf08396fb2bb9a58735128459ae1a04af5bddea74108676e1ea98d280471"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.712356 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" event={"ID":"f4ea70ef-743e-44ef-804c-2f1321999baa","Type":"ContainerStarted","Data":"856a19ea3f6bb5c476999f4118a140582f9a1373a71996bd3be3685b50fae44b"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.712435 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.715307 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9hdml" event={"ID":"66dfc07c-fa4c-48ad-9904-2a767310c6ac","Type":"ContainerStarted","Data":"d1ca058a4ce2873007f1c7261ff2f0f31984ceb9aba5a5bba09875c116ebba39"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.717517 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" event={"ID":"913f9471-59b8-4494-964d-0db4086d77ab","Type":"ContainerStarted","Data":"d10367914cd300d01c9b56f2bbd95d902130de10c973ab9d29618714cacdeb85"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.717702 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.720716 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ppzj4" podStartSLOduration=125.720695329 podStartE2EDuration="2m5.720695329s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.718602546 +0000 UTC m=+146.446306626" watchObservedRunningTime="2026-02-18 11:39:04.720695329 +0000 UTC m=+146.448399409" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.721256 4922 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vgs8b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.721301 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" podUID="913f9471-59b8-4494-964d-0db4086d77ab" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.722132 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-prk5g" event={"ID":"6ae30939-0d1c-4856-86e0-2b0b4797fa6a","Type":"ContainerStarted","Data":"3517a33c4999f050839ab57c18af1c001b2c34efea6e1a5b34392e4587e073de"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.724820 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbpm4" event={"ID":"d523bc23-dd6a-4d1f-b72b-2070ecce0cde","Type":"ContainerStarted","Data":"7431926b174e82de716da14c7c22b82e49895f227151ee5e2f7edef9d6ab1ffb"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.724857 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gbpm4" event={"ID":"d523bc23-dd6a-4d1f-b72b-2070ecce0cde","Type":"ContainerStarted","Data":"0cb3d1345b998746e1242f7bfb1c40974d53b3289c499e13bddda8a47a7ae977"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.724952 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.726246 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" event={"ID":"aa233e7a-8a71-495c-b696-2f3dac9f0ada","Type":"ContainerStarted","Data":"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.726482 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.728017 4922 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nc7b9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.728078 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.728315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" event={"ID":"817e4164-fa3e-41e5-8638-6a512b9d28bf","Type":"ContainerStarted","Data":"129c2315b1a47a4586de5e9f9f7716909cf67b9add10228f2cffcf3c179dd78c"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.731021 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-js5gq" event={"ID":"69fd90c9-8767-4d22-b88e-33fafd8026d8","Type":"ContainerStarted","Data":"bf73eb69e2451fbf2587410449a76eab593426f6282bcf400200cd18ed5415d9"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.733202 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" 
event={"ID":"faaf8fb4-0dba-494d-8a14-2dba7901f50a","Type":"ContainerStarted","Data":"720b2484c75640f7273395f1c0aceb8c1aaf51e7b3b5916211b28cfe7c6147e9"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.735421 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" event={"ID":"374ac04a-b37d-42c8-b0ca-e2647c86bc74","Type":"ContainerStarted","Data":"463713d8f98bc911518d2ca27eeab62e0684e10e9b167f050739a8e084663469"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.735465 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" event={"ID":"374ac04a-b37d-42c8-b0ca-e2647c86bc74","Type":"ContainerStarted","Data":"a4a54a9edd61103198db5dfd95b5ce19026c37e7687af7267aabb69845ac249a"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.736768 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" event={"ID":"4891f319-eff4-4b7f-912e-45da55cb4fc2","Type":"ContainerStarted","Data":"9404f5e2ae4aabd3d9991c6817b693cd457683a6951ce280ea0524cf0d419898"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.736979 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.739193 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" event={"ID":"d17bc269-b566-4738-ac8f-354d91dd9245","Type":"ContainerStarted","Data":"fe3960cce94d066ad32220e4455ff52bcf307ee9a5bcb70c93c7628930d14353"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.739263 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" event={"ID":"d17bc269-b566-4738-ac8f-354d91dd9245","Type":"ContainerStarted","Data":"fd07afdb87bdbec61651ca48b9f569e1f81df8b8a6b58534d3247942100d567d"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.739576 4922 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zm5zz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.739597 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.739615 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" podUID="4891f319-eff4-4b7f-912e-45da55cb4fc2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.740835 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" event={"ID":"adf4d88c-a19b-49bf-bb62-eef23b55efae","Type":"ContainerStarted","Data":"2d581da97420156437790faed18ade975933011c52eec00dca326338151b817d"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.742869 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" event={"ID":"353bd1c5-bab8-42cc-925a-d9776ac60b6b","Type":"ContainerStarted","Data":"6f1e7be260dd556cd0986477cadcd0f657832041dd39aa3f2efa9d0308fdf348"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.745813 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" event={"ID":"9b494454-5efb-466f-81bd-754f7d6fa0a8","Type":"ContainerStarted","Data":"f1168c8c54acf15fffb4b9d435d37323d0c717516e4f3a062f239f98765b2216"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.745856 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" event={"ID":"9b494454-5efb-466f-81bd-754f7d6fa0a8","Type":"ContainerStarted","Data":"a716f7cc11ffb59f882c129b969ab905c12ed0dafdf8d34ce18a47de24d232c5"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.749443 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" event={"ID":"2f7958cf-7c2d-4c29-bea8-5871267d5e16","Type":"ContainerStarted","Data":"c84ab4747087531677041b9910125bba0d9c74d28809f960eb9f4f401b7180fe"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.751461 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" event={"ID":"027da92d-9293-48ea-bd00-47b0fcb186fd","Type":"ContainerStarted","Data":"cf5ac84c4abc58bebbc7b8c441d8ff470b1e0e95b354d2cfa1151169bc01b6e2"} Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752218 4922 patch_prober.go:28] interesting pod/console-operator-58897d9998-sddqb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752330 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sddqb" podUID="ca3008b0-2ba6-4dfd-9fea-d1890e2af197" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752489 4922 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-q7mwg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752545 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752709 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.752754 4922 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-b6dxx" podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.756303 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gst7r" podStartSLOduration=125.756273479 podStartE2EDuration="2m5.756273479s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.75591291 +0000 UTC m=+146.483617000" watchObservedRunningTime="2026-02-18 11:39:04.756273479 +0000 UTC m=+146.483977569" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.810583 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.810763 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.310722187 +0000 UTC m=+147.038426267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.811498 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.813097 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.313079807 +0000 UTC m=+147.040783997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.849078 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" podStartSLOduration=125.849046887 podStartE2EDuration="2m5.849046887s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.794596499 +0000 UTC m=+146.522300579" watchObservedRunningTime="2026-02-18 11:39:04.849046887 +0000 UTC m=+146.576750957" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.909201 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rwxzh" podStartSLOduration=125.90917968 podStartE2EDuration="2m5.90917968s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.864907289 +0000 UTC m=+146.592611379" watchObservedRunningTime="2026-02-18 11:39:04.90917968 +0000 UTC m=+146.636883760" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.910617 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" podStartSLOduration=126.910608816 podStartE2EDuration="2m6.910608816s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.909653422 +0000 UTC m=+146.637357502" watchObservedRunningTime="2026-02-18 11:39:04.910608816 +0000 UTC m=+146.638312896" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.913849 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:04 crc kubenswrapper[4922]: E0218 11:39:04.915097 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.415073379 +0000 UTC m=+147.142777529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.986818 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r84r2" podStartSLOduration=125.986804544 podStartE2EDuration="2m5.986804544s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.986380794 +0000 UTC m=+146.714084874" watchObservedRunningTime="2026-02-18 11:39:04.986804544 +0000 UTC m=+146.714508624" Feb 18 11:39:04 crc kubenswrapper[4922]: I0218 11:39:04.987614 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h8l7q" podStartSLOduration=125.987609325 podStartE2EDuration="2m5.987609325s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:04.948016923 +0000 UTC m=+146.675721003" watchObservedRunningTime="2026-02-18 11:39:04.987609325 +0000 UTC m=+146.715313405" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.016112 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.016698 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.516684521 +0000 UTC m=+147.244388601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.022285 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" podStartSLOduration=126.022264562 podStartE2EDuration="2m6.022264562s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.020652991 +0000 UTC m=+146.748357071" watchObservedRunningTime="2026-02-18 11:39:05.022264562 +0000 UTC m=+146.749968642" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.058522 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" podStartSLOduration=127.058508229 podStartE2EDuration="2m7.058508229s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.057824592 +0000 UTC m=+146.785528672" watchObservedRunningTime="2026-02-18 11:39:05.058508229 +0000 UTC m=+146.786212299" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.072830 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.073453 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.075534 4922 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vhdd8 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.32:8443/livez\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.075594 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" podUID="2f7958cf-7c2d-4c29-bea8-5871267d5e16" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.32:8443/livez\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.100786 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" podStartSLOduration=126.100767289 podStartE2EDuration="2m6.100767289s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.098759258 +0000 UTC m=+146.826463338" watchObservedRunningTime="2026-02-18 11:39:05.100767289 +0000 UTC m=+146.828471369" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.117433 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.117838 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.617822981 +0000 UTC m=+147.345527061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.126046 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:05 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:05 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:05 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.126106 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.212838 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" podStartSLOduration=126.212822675 podStartE2EDuration="2m6.212822675s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.208857425 +0000 UTC m=+146.936561505" watchObservedRunningTime="2026-02-18 11:39:05.212822675 +0000 UTC m=+146.940526755" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.213601 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm5bk" podStartSLOduration=126.213593775 podStartE2EDuration="2m6.213593775s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.145930912 +0000 UTC m=+146.873634992" watchObservedRunningTime="2026-02-18 11:39:05.213593775 +0000 UTC m=+146.941297855" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.221144 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.221437 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.721426193 +0000 UTC m=+147.449130273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.322903 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.323247 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.82323399 +0000 UTC m=+147.550938070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.359542 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" podStartSLOduration=126.359522078 podStartE2EDuration="2m6.359522078s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.316593322 +0000 UTC m=+147.044297392" watchObservedRunningTime="2026-02-18 11:39:05.359522078 +0000 UTC m=+147.087226158" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.397222 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" podStartSLOduration=126.397200182 podStartE2EDuration="2m6.397200182s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.361618521 +0000 UTC m=+147.089322601" watchObservedRunningTime="2026-02-18 11:39:05.397200182 +0000 UTC m=+147.124904262" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.398262 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wdprh" podStartSLOduration=127.398254509 
podStartE2EDuration="2m7.398254509s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.397689775 +0000 UTC m=+147.125393855" watchObservedRunningTime="2026-02-18 11:39:05.398254509 +0000 UTC m=+147.125958589" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.430200 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.431179 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:05.931167462 +0000 UTC m=+147.658871542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.465615 4922 csr.go:261] certificate signing request csr-6t8gk is approved, waiting to be issued Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.487326 4922 csr.go:257] certificate signing request csr-6t8gk is issued Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.504235 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9kz6f" podStartSLOduration=126.504216941 podStartE2EDuration="2m6.504216941s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.441704039 +0000 UTC m=+147.169408129" watchObservedRunningTime="2026-02-18 11:39:05.504216941 +0000 UTC m=+147.231921021" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.532159 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.532655 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.03263579 +0000 UTC m=+147.760339870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.550937 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gbpm4" podStartSLOduration=8.550922563 podStartE2EDuration="8.550922563s" podCreationTimestamp="2026-02-18 11:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.546345547 +0000 UTC m=+147.274049627" watchObservedRunningTime="2026-02-18 11:39:05.550922563 +0000 UTC m=+147.278626643" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.551230 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jkv6t" podStartSLOduration=126.551225971 podStartE2EDuration="2m6.551225971s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.506197471 +0000 UTC m=+147.233901551" watchObservedRunningTime="2026-02-18 11:39:05.551225971 +0000 UTC m=+147.278930051" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.575669 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9b55m" podStartSLOduration=127.575647349 podStartE2EDuration="2m7.575647349s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.568722214 +0000 UTC m=+147.296426294" watchObservedRunningTime="2026-02-18 11:39:05.575647349 +0000 UTC m=+147.303351449" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.598343 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v4cwm" podStartSLOduration=126.598329783 podStartE2EDuration="2m6.598329783s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.596765553 +0000 UTC m=+147.324469633" watchObservedRunningTime="2026-02-18 11:39:05.598329783 +0000 UTC m=+147.326033853" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.621202 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-b6msr" podStartSLOduration=126.621185372 podStartE2EDuration="2m6.621185372s" podCreationTimestamp="2026-02-18 11:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:05.620020322 +0000 UTC m=+147.347724402" watchObservedRunningTime="2026-02-18 11:39:05.621185372 +0000 UTC m=+147.348889452" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.635248 4922 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.635703 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.135688769 +0000 UTC m=+147.863392849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.651710 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.739922 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.740470 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.24045161 +0000 UTC m=+147.968155690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.795355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" event={"ID":"a768634b-1586-4ba2-9a05-6a88f5befea1","Type":"ContainerStarted","Data":"72bb5869f6a6496f8b63fd6ed562f4bb406d18a6d8ad2d2971ddbfd870acaf55"} Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.796717 4922 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nc7b9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.796763 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.819701 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-p4qtq" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.848240 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.848363 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zm5zz" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.848659 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.348643399 +0000 UTC m=+148.076347479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.949091 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:05 crc kubenswrapper[4922]: I0218 11:39:05.969796 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:39:05 crc kubenswrapper[4922]: E0218 11:39:05.970438 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.470400411 +0000 UTC m=+148.198104491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.077440 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.077913 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.577898392 +0000 UTC m=+148.305602472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.122793 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:06 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:06 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:06 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.122857 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.178744 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.178966 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.678925939 +0000 UTC m=+148.406630019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.179215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.179696 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.679674048 +0000 UTC m=+148.407378128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.280541 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.280964 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.780897489 +0000 UTC m=+148.508601569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.281099 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.281534 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.781518365 +0000 UTC m=+148.509222445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.382059 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.382286 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.882245524 +0000 UTC m=+148.609949604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.382754 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.383074 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.883059705 +0000 UTC m=+148.610763785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.484270 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.484703 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.984668767 +0000 UTC m=+148.712372857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.485584 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.486091 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:06.986078892 +0000 UTC m=+148.713782972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.488210 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 11:34:05 +0000 UTC, rotation deadline is 2026-11-01 11:09:18.354946646 +0000 UTC Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.488253 4922 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6143h30m11.866696429s for next certificate rotation Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.586923 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.587231 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.087216062 +0000 UTC m=+148.814920142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.587588 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.587889 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.087881399 +0000 UTC m=+148.815585479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.666942 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vgs8b" Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.688852 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.689109 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.189092491 +0000 UTC m=+148.916796571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.689854 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.18984601 +0000 UTC m=+148.917550090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.689965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.791833 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.792029 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.292000316 +0000 UTC m=+149.019704396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.792873 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.793348 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.29333363 +0000 UTC m=+149.021037710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.802186 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" event={"ID":"a768634b-1586-4ba2-9a05-6a88f5befea1","Type":"ContainerStarted","Data":"cdd3dc5fb1f6da55feebfd1371f0fa68eee2082cb297235c6c7633a91ebfc381"} Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.893930 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.894295 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.394279385 +0000 UTC m=+149.121983465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.894635 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.894931 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.394923991 +0000 UTC m=+149.122628071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.920347 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.921238 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.923350 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.941619 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:39:06 crc kubenswrapper[4922]: I0218 11:39:06.996185 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:06 crc kubenswrapper[4922]: E0218 11:39:06.997455 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.497432476 +0000 UTC m=+149.225136626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.068967 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.069977 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.072067 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.089284 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.098004 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.098076 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5ts\" (UniqueName: \"kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.098278 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.098325 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.098645 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.598633807 +0000 UTC m=+149.326337877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.119367 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:07 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:07 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:07 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.119424 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.202941 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203193 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203258 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203326 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203355 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203428 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6nbf\" (UniqueName: 
\"kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.203462 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5ts\" (UniqueName: \"kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.203841 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.70382776 +0000 UTC m=+149.431531840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.204551 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.204769 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.234244 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5ts\" (UniqueName: \"kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts\") pod \"community-operators-5lflw\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.251676 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.283380 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.284727 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.302827 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.308494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.308562 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6nbf\" (UniqueName: \"kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.308614 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.308644 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.308957 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.808945631 +0000 UTC m=+149.536649711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.309413 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.309904 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.363123 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6nbf\" (UniqueName: \"kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf\") pod \"certified-operators-7dzbt\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.383653 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.409832 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.410041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvvjz\" (UniqueName: \"kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.410072 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.410130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.410256 4922 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:07.910242535 +0000 UTC m=+149.637946615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.514263 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvvjz\" (UniqueName: \"kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.514559 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.514621 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.514649 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.514951 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.014939855 +0000 UTC m=+149.742643935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.515660 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.515860 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.520558 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.524670 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.569108 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.573417 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvvjz\" (UniqueName: \"kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz\") pod \"community-operators-lcnjk\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.618478 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.618795 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.118781843 +0000 UTC m=+149.846485923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.627650 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.720214 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tq9z\" (UniqueName: \"kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.720274 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.720332 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.720477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.720906 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.220890858 +0000 UTC m=+149.948594948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.740527 4922 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ks48g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]log ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]etcd ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/max-in-flight-filter ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 11:39:07 crc kubenswrapper[4922]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/openshift.io-startinformers ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 11:39:07 crc kubenswrapper[4922]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 11:39:07 crc kubenswrapper[4922]: livez check failed Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.740585 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" podUID="8719fb44-5fea-4fd5-a516-5d2ab11c221c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.824027 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.824462 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tq9z\" (UniqueName: \"kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.824537 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content\") pod \"certified-operators-nrxb6\" (UID: 
\"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.824617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.825861 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.825979 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.325955957 +0000 UTC m=+150.053660077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.827940 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.868466 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tq9z\" (UniqueName: \"kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z\") pod \"certified-operators-nrxb6\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.869970 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.874458 4922 generic.go:334] "Generic (PLEG): container finished" podID="75c707c4-5c62-438f-8312-2307d3ef0ba8" containerID="28b5d335d68c0542326e75d8138276312eb3264af281f35e13ca715cee393fd1" exitCode=0 Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.874587 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" event={"ID":"75c707c4-5c62-438f-8312-2307d3ef0ba8","Type":"ContainerDied","Data":"28b5d335d68c0542326e75d8138276312eb3264af281f35e13ca715cee393fd1"} Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.885873 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" 
event={"ID":"a768634b-1586-4ba2-9a05-6a88f5befea1","Type":"ContainerStarted","Data":"03e1a416ade5db9ec99c4b0589181f7a5bef5c1cac2f5083f1dbdcfa6f069bdb"} Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.885917 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" event={"ID":"a768634b-1586-4ba2-9a05-6a88f5befea1","Type":"ContainerStarted","Data":"7f104f486c331e04994cef74c68022853ba96007cd1988029409f1ba6aeaa07c"} Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.920189 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.930656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:07 crc kubenswrapper[4922]: E0218 11:39:07.931669 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.431657023 +0000 UTC m=+150.159361103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:07 crc kubenswrapper[4922]: I0218 11:39:07.945233 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" podStartSLOduration=10.945216676 podStartE2EDuration="10.945216676s" podCreationTimestamp="2026-02-18 11:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:07.938966908 +0000 UTC m=+149.666670988" watchObservedRunningTime="2026-02-18 11:39:07.945216676 +0000 UTC m=+149.672920756" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033110 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033415 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033454 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033501 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:39:08 crc kubenswrapper[4922]: E0218 11:39:08.033617 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.533574272 +0000 UTC m=+150.261278352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033825 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.033870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: E0218 11:39:08.034379 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.534354442 +0000 UTC m=+150.262058522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.039066 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.048022 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.053876 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.059255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.065866 4922 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.085874 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:39:08 crc kubenswrapper[4922]: W0218 11:39:08.105064 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9b41f8_ac9b_4166_a2a6_80326e19254a.slice/crio-a8cd6fc2cc1529740a5644abc0364ded3c6ac51bb6647c92d79339c69415a299 WatchSource:0}: Error finding container a8cd6fc2cc1529740a5644abc0364ded3c6ac51bb6647c92d79339c69415a299: Status 404 returned error can't find the container with id a8cd6fc2cc1529740a5644abc0364ded3c6ac51bb6647c92d79339c69415a299 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.125791 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:08 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:08 crc 
kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:08 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.125882 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.140383 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:08 crc kubenswrapper[4922]: E0218 11:39:08.140564 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.6405375 +0000 UTC m=+150.368241580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.140643 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: E0218 11:39:08.140924 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:39:08.640913339 +0000 UTC m=+150.368617419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wt2rf" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.153265 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.230487 4922 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T11:39:08.06589768Z","Handler":null,"Name":""} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.236501 4922 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.236533 4922 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.241791 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.265755 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.298637 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.307153 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.312278 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.345145 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.355272 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.355498 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.404665 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:39:08 crc kubenswrapper[4922]: W0218 11:39:08.492741 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ccc67a_6393_4dea_9c00_24bbc55e34d3.slice/crio-cf75383b2030608656ce7bf5f480ccca8d0511a948c0bdf822601c66ee3563e2 WatchSource:0}: Error finding container cf75383b2030608656ce7bf5f480ccca8d0511a948c0bdf822601c66ee3563e2: Status 404 returned error can't find the container with id cf75383b2030608656ce7bf5f480ccca8d0511a948c0bdf822601c66ee3563e2 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.521152 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wt2rf\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.700579 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.701439 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.707626 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.707883 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.721635 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.753149 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7twg2" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.763180 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.852691 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.852787 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.869802 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.873158 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.882948 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.905692 4922 generic.go:334] "Generic (PLEG): container finished" podID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerID="120adbdad8789c27eefe3c782cdf2eec2b4857607b20057ec0fcf6bbe6831fd0" exitCode=0 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.905758 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerDied","Data":"120adbdad8789c27eefe3c782cdf2eec2b4857607b20057ec0fcf6bbe6831fd0"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.905791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerStarted","Data":"a8cd6fc2cc1529740a5644abc0364ded3c6ac51bb6647c92d79339c69415a299"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.906048 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.913254 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.923812 4922 generic.go:334] "Generic (PLEG): container finished" podID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerID="26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16" exitCode=0 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.923916 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerDied","Data":"26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.923952 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerStarted","Data":"cf75383b2030608656ce7bf5f480ccca8d0511a948c0bdf822601c66ee3563e2"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.938348 4922 generic.go:334] "Generic (PLEG): container finished" podID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerID="51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605" exitCode=0 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.938486 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerDied","Data":"51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.938511 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerStarted","Data":"c1ce59c10870c2ecd21ad32da1730316e1c9e1d338deac7b1c3b3f7688db298c"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.944641 4922 generic.go:334] "Generic (PLEG): container finished" podID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerID="f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c" 
exitCode=0 Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.947618 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerDied","Data":"f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.947646 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerStarted","Data":"aa0626d406720474e06eba27d9c88b12751f048f72073c63b3e1e91b6784d080"} Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.956003 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7zw6\" (UniqueName: \"kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.956132 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.956154 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.956337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.956643 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.957096 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:08 crc kubenswrapper[4922]: W0218 11:39:08.957877 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-2a5135b69f4b3807d7677aeea252c6b9b3eb722654036f6b8e1f09af0f3488da WatchSource:0}: Error finding container 2a5135b69f4b3807d7677aeea252c6b9b3eb722654036f6b8e1f09af0f3488da: Status 404 returned error can't find the container with id 
2a5135b69f4b3807d7677aeea252c6b9b3eb722654036f6b8e1f09af0f3488da Feb 18 11:39:08 crc kubenswrapper[4922]: I0218 11:39:08.981833 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:08.998989 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.041299 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.058480 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7zw6\" (UniqueName: \"kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.058647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.058697 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.059806 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.061951 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.098479 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7zw6\" (UniqueName: \"kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6\") pod \"redhat-marketplace-5vjsn\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.130733 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Feb 18 11:39:09 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:09 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:09 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.130794 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.160220 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.198500 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:39:09 crc kubenswrapper[4922]: W0218 11:39:09.200926 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9f6f1e_d5ab_4de4_b8b4_ee14f742f2e0.slice/crio-ea1a8d4dd7f49c86b94455df1de54cf958a56e87072aa59f4711d51402743ec5 WatchSource:0}: Error finding container ea1a8d4dd7f49c86b94455df1de54cf958a56e87072aa59f4711d51402743ec5: Status 404 returned error can't find the container with id ea1a8d4dd7f49c86b94455df1de54cf958a56e87072aa59f4711d51402743ec5 Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.296909 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.304794 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.319740 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.369107 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.466905 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.467001 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.467087 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzdsv\" (UniqueName: \"kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568067 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume\") pod \"75c707c4-5c62-438f-8312-2307d3ef0ba8\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568496 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdbmc\" (UniqueName: \"kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc\") pod \"75c707c4-5c62-438f-8312-2307d3ef0ba8\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568601 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume\") pod \"75c707c4-5c62-438f-8312-2307d3ef0ba8\" (UID: \"75c707c4-5c62-438f-8312-2307d3ef0ba8\") " Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568747 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568770 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzdsv\" (UniqueName: \"kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.568828 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " 
pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.569460 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.570031 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.570118 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume" (OuterVolumeSpecName: "config-volume") pod "75c707c4-5c62-438f-8312-2307d3ef0ba8" (UID: "75c707c4-5c62-438f-8312-2307d3ef0ba8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.575004 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75c707c4-5c62-438f-8312-2307d3ef0ba8" (UID: "75c707c4-5c62-438f-8312-2307d3ef0ba8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.585222 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc" (OuterVolumeSpecName: "kube-api-access-qdbmc") pod "75c707c4-5c62-438f-8312-2307d3ef0ba8" (UID: "75c707c4-5c62-438f-8312-2307d3ef0ba8"). InnerVolumeSpecName "kube-api-access-qdbmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.588694 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzdsv\" (UniqueName: \"kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv\") pod \"redhat-marketplace-nf4nk\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.620898 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:39:09 crc kubenswrapper[4922]: W0218 11:39:09.629795 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod45f5b001_7d04_46f6_b77d_f79f28d8513e.slice/crio-a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c WatchSource:0}: Error finding container a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c: Status 404 returned error can't find the container with id a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.662683 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.669039 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.669798 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c707c4-5c62-438f-8312-2307d3ef0ba8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.669824 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75c707c4-5c62-438f-8312-2307d3ef0ba8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.669835 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdbmc\" (UniqueName: \"kubernetes.io/projected/75c707c4-5c62-438f-8312-2307d3ef0ba8-kube-api-access-qdbmc\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:09 crc kubenswrapper[4922]: W0218 11:39:09.678135 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0d2342_e758_43cc_8c89_adc3ceb98453.slice/crio-253c1fe74cd3ee618dc4b79bd02124aafb18cc9d88de6205e0a30abc2ecadb34 WatchSource:0}: Error finding container 253c1fe74cd3ee618dc4b79bd02124aafb18cc9d88de6205e0a30abc2ecadb34: Status 404 returned error can't find the container with id 253c1fe74cd3ee618dc4b79bd02124aafb18cc9d88de6205e0a30abc2ecadb34 Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.722829 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.724022 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.726687 4922 patch_prober.go:28] interesting pod/console-f9d7485db-nfn89 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.726737 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nfn89" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.810873 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.810946 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.835423 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.921520 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.943270 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sddqb" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.952251 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.952236 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv" event={"ID":"75c707c4-5c62-438f-8312-2307d3ef0ba8","Type":"ContainerDied","Data":"28c2cf88217349a49db593635ea8e128208ef1ae24d7cc6d1020cc30632765bf"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.952854 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c2cf88217349a49db593635ea8e128208ef1ae24d7cc6d1020cc30632765bf" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.953975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4eb08f37e8241ad9b183d2b32c46286809fafe4a88a928b684e7f920b1a52932"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.954020 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2a5135b69f4b3807d7677aeea252c6b9b3eb722654036f6b8e1f09af0f3488da"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.957647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d226335892aa0135b931a64fb29dcd629371ddaecbe8c104cc966a127e262f55"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.957693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9afdc23b817126f5d66c3dd8ca0fb5dd583325bf96b4bbfe24218da4f9354664"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.958598 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.960583 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"eaeb02c76dcba489d6dea1f6a0d48a46cdcf6a4ee0f90ed3b572ff1b0654f3a8"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.960609 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8ed6222e5539873a8dc19ddf3c314d51efbdcdb13cb12f6de6e8a9cf3359f762"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.967758 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" event={"ID":"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0","Type":"ContainerStarted","Data":"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.967798 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" event={"ID":"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0","Type":"ContainerStarted","Data":"ea1a8d4dd7f49c86b94455df1de54cf958a56e87072aa59f4711d51402743ec5"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.967815 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.980311 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.980329 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.980900 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b6dxx" podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.980380 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-b6dxx" podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:09 crc kubenswrapper[4922]: W0218 11:39:09.984164 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0a05956_6087_461d_a271_52db98c6032a.slice/crio-a1f5dac4997bf52c337b4ff07c819ce5fc50051e9cba18a9a9a797070bbd762b WatchSource:0}: Error finding container a1f5dac4997bf52c337b4ff07c819ce5fc50051e9cba18a9a9a797070bbd762b: Status 404 returned error can't find the container with id a1f5dac4997bf52c337b4ff07c819ce5fc50051e9cba18a9a9a797070bbd762b Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.991909 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerDied","Data":"88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50"} Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.992091 4922 generic.go:334] "Generic (PLEG): container finished" podID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerID="88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50" exitCode=0 Feb 18 11:39:09 crc kubenswrapper[4922]: I0218 11:39:09.992165 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" 
event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerStarted","Data":"253c1fe74cd3ee618dc4b79bd02124aafb18cc9d88de6205e0a30abc2ecadb34"} Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.011326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45f5b001-7d04-46f6-b77d-f79f28d8513e","Type":"ContainerStarted","Data":"a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c"} Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.056199 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" podStartSLOduration=132.056179417 podStartE2EDuration="2m12.056179417s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:10.053710404 +0000 UTC m=+151.781414484" watchObservedRunningTime="2026-02-18 11:39:10.056179417 +0000 UTC m=+151.783883497" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.086110 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.094129 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vhdd8" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.115710 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.118809 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:10 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:10 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:10 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.118856 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.273263 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:39:10 crc kubenswrapper[4922]: E0218 11:39:10.273554 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c707c4-5c62-438f-8312-2307d3ef0ba8" containerName="collect-profiles" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.273568 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c707c4-5c62-438f-8312-2307d3ef0ba8" containerName="collect-profiles" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.273689 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c707c4-5c62-438f-8312-2307d3ef0ba8" containerName="collect-profiles" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.280197 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.284023 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.291278 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.381864 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z6qm\" (UniqueName: \"kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.381906 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.381999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.482919 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.483218 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z6qm\" (UniqueName: \"kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.483237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.483669 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.483872 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " 
pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.522455 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z6qm\" (UniqueName: \"kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm\") pod \"redhat-operators-wz74v\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.582961 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.606254 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.672805 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.677701 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.679984 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.787992 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.788041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c89j\" (UniqueName: \"kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.788089 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.889998 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.890249 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c89j\" (UniqueName: \"kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.890279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.890523 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.890563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.914027 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c89j\" (UniqueName: \"kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j\") pod \"redhat-operators-wg8w6\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:10 crc kubenswrapper[4922]: I0218 11:39:10.993810 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.028219 4922 generic.go:334] "Generic (PLEG): container finished" podID="d0a05956-6087-461d-a271-52db98c6032a" containerID="e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0" exitCode=0 Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.028296 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerDied","Data":"e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0"} Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.028322 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerStarted","Data":"a1f5dac4997bf52c337b4ff07c819ce5fc50051e9cba18a9a9a797070bbd762b"} Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.031827 4922 generic.go:334] "Generic (PLEG): container finished" podID="45f5b001-7d04-46f6-b77d-f79f28d8513e" containerID="1a5b9e642430835de9898ea0d1086d2036a4ac3e11f7db0b73326129db5097a8" exitCode=0 Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.031968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45f5b001-7d04-46f6-b77d-f79f28d8513e","Type":"ContainerDied","Data":"1a5b9e642430835de9898ea0d1086d2036a4ac3e11f7db0b73326129db5097a8"} Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.119597 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:11 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:11 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:11 crc kubenswrapper[4922]: healthz check failed 
Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.119741 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.174356 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:39:11 crc kubenswrapper[4922]: I0218 11:39:11.265139 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:39:11 crc kubenswrapper[4922]: W0218 11:39:11.307428 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80efc1b6_0ae2_4cbf_8dc9_0e2c4d526f54.slice/crio-25d2b5e0e4b5a6b68be1bcb304a04f4b6084a6c0bdb0404fc5532afe958fbbd6 WatchSource:0}: Error finding container 25d2b5e0e4b5a6b68be1bcb304a04f4b6084a6c0bdb0404fc5532afe958fbbd6: Status 404 returned error can't find the container with id 25d2b5e0e4b5a6b68be1bcb304a04f4b6084a6c0bdb0404fc5532afe958fbbd6 Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.041411 4922 generic.go:334] "Generic (PLEG): container finished" podID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerID="339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679" exitCode=0 Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.041579 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerDied","Data":"339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679"} Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.041713 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerStarted","Data":"86936b3faf99a98562fccc6ce9e3e9f7de7879c692a3b15d363c67f9bb07864e"} Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.045643 4922 generic.go:334] "Generic (PLEG): container finished" podID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerID="8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777" exitCode=0 Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.045725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerDied","Data":"8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777"} Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.045765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerStarted","Data":"25d2b5e0e4b5a6b68be1bcb304a04f4b6084a6c0bdb0404fc5532afe958fbbd6"} Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.118099 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:12 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:12 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:12 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 
11:39:12.118147 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.292897 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.424740 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir\") pod \"45f5b001-7d04-46f6-b77d-f79f28d8513e\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.424884 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access\") pod \"45f5b001-7d04-46f6-b77d-f79f28d8513e\" (UID: \"45f5b001-7d04-46f6-b77d-f79f28d8513e\") " Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.426097 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "45f5b001-7d04-46f6-b77d-f79f28d8513e" (UID: "45f5b001-7d04-46f6-b77d-f79f28d8513e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.437563 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "45f5b001-7d04-46f6-b77d-f79f28d8513e" (UID: "45f5b001-7d04-46f6-b77d-f79f28d8513e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.526736 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45f5b001-7d04-46f6-b77d-f79f28d8513e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.526799 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45f5b001-7d04-46f6-b77d-f79f28d8513e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.722375 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:39:12 crc kubenswrapper[4922]: I0218 11:39:12.726749 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ks48g" Feb 18 11:39:13 crc kubenswrapper[4922]: I0218 11:39:13.072827 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"45f5b001-7d04-46f6-b77d-f79f28d8513e","Type":"ContainerDied","Data":"a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c"} Feb 18 11:39:13 crc kubenswrapper[4922]: I0218 11:39:13.073189 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b4977daf388120fe76f0f273845c52dc404c3e21cd7f8660810678fb42ce9c" Feb 18 11:39:13 crc kubenswrapper[4922]: I0218 11:39:13.072904 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:39:13 crc kubenswrapper[4922]: I0218 11:39:13.119329 4922 patch_prober.go:28] interesting pod/router-default-5444994796-ktkz9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:39:13 crc kubenswrapper[4922]: [-]has-synced failed: reason withheld Feb 18 11:39:13 crc kubenswrapper[4922]: [+]process-running ok Feb 18 11:39:13 crc kubenswrapper[4922]: healthz check failed Feb 18 11:39:13 crc kubenswrapper[4922]: I0218 11:39:13.119429 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ktkz9" podUID="9f7b66c5-b258-4314-b3a5-e08b958245b6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.128259 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.131316 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ktkz9" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.249439 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:39:14 crc kubenswrapper[4922]: E0218 11:39:14.249787 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f5b001-7d04-46f6-b77d-f79f28d8513e" containerName="pruner" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.249809 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f5b001-7d04-46f6-b77d-f79f28d8513e" containerName="pruner" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.249936 4922 
memory_manager.go:354] "RemoveStaleState removing state" podUID="45f5b001-7d04-46f6-b77d-f79f28d8513e" containerName="pruner" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.250452 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.252593 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.252983 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.259184 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.390301 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.390405 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.492176 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.492247 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.492359 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.521041 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:14 crc kubenswrapper[4922]: I0218 11:39:14.590855 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:15 crc kubenswrapper[4922]: I0218 11:39:15.232753 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:39:15 crc kubenswrapper[4922]: I0218 11:39:15.693864 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gbpm4" Feb 18 11:39:16 crc kubenswrapper[4922]: I0218 11:39:16.114324 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6a4f0f44-f90d-4e2e-a41a-0d785c890c11","Type":"ContainerStarted","Data":"59b18b090d0689c87a8a1db521371c6f148f035c17433d7c2edcc69c82db0fb3"} Feb 18 11:39:16 crc kubenswrapper[4922]: I0218 11:39:16.114393 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6a4f0f44-f90d-4e2e-a41a-0d785c890c11","Type":"ContainerStarted","Data":"ccbaee4d994d716c70b98bd2ca44e40786371cc7260f2de61281d7fb8c3aecf3"} Feb 18 11:39:17 crc kubenswrapper[4922]: I0218 11:39:17.123064 4922 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0f44-f90d-4e2e-a41a-0d785c890c11" containerID="59b18b090d0689c87a8a1db521371c6f148f035c17433d7c2edcc69c82db0fb3" exitCode=0 Feb 18 11:39:17 crc kubenswrapper[4922]: I0218 11:39:17.123280 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6a4f0f44-f90d-4e2e-a41a-0d785c890c11","Type":"ContainerDied","Data":"59b18b090d0689c87a8a1db521371c6f148f035c17433d7c2edcc69c82db0fb3"} Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.723784 4922 patch_prober.go:28] interesting pod/console-f9d7485db-nfn89 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.724544 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nfn89" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.979175 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.979227 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b6dxx" podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.980595 4922 patch_prober.go:28] interesting pod/downloads-7954f5f757-b6dxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 18 11:39:19 crc kubenswrapper[4922]: I0218 11:39:19.980670 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-b6dxx" 
podUID="48dabf7e-d1d7-48b6-bc70-5cc88cdcf994" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 18 11:39:21 crc kubenswrapper[4922]: I0218 11:39:21.816927 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:39:21 crc kubenswrapper[4922]: I0218 11:39:21.823422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4702cf45-b47b-4291-a553-5bfc7bc22674-metrics-certs\") pod \"network-metrics-daemon-pspfr\" (UID: \"4702cf45-b47b-4291-a553-5bfc7bc22674\") " pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:39:21 crc kubenswrapper[4922]: I0218 11:39:21.994587 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pspfr" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.085171 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.120713 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access\") pod \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.120769 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir\") pod \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\" (UID: \"6a4f0f44-f90d-4e2e-a41a-0d785c890c11\") " Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.120968 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6a4f0f44-f90d-4e2e-a41a-0d785c890c11" (UID: "6a4f0f44-f90d-4e2e-a41a-0d785c890c11"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.124274 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6a4f0f44-f90d-4e2e-a41a-0d785c890c11" (UID: "6a4f0f44-f90d-4e2e-a41a-0d785c890c11"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.153437 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6a4f0f44-f90d-4e2e-a41a-0d785c890c11","Type":"ContainerDied","Data":"ccbaee4d994d716c70b98bd2ca44e40786371cc7260f2de61281d7fb8c3aecf3"} Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.153575 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccbaee4d994d716c70b98bd2ca44e40786371cc7260f2de61281d7fb8c3aecf3" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.153480 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.222024 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:22 crc kubenswrapper[4922]: I0218 11:39:22.222065 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4f0f44-f90d-4e2e-a41a-0d785c890c11-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:27 crc kubenswrapper[4922]: I0218 11:39:27.486049 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:39:27 crc kubenswrapper[4922]: I0218 11:39:27.486717 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" containerID="cri-o://5e927c6e4399c8def57efddaefd76cf053c5e57aa111182c3f59a17801fceec0" gracePeriod=30 Feb 18 11:39:27 crc kubenswrapper[4922]: I0218 11:39:27.512791 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:39:27 crc kubenswrapper[4922]: I0218 11:39:27.513468 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" containerID="cri-o://6194b5e6d35bc369d7c503f0dab1c1b4699e56c00e185999f0dd2d244bb4e4b6" gracePeriod=30 Feb 18 11:39:28 crc kubenswrapper[4922]: I0218 11:39:28.772613 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:39:29 crc kubenswrapper[4922]: I0218 11:39:29.728073 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:39:29 crc kubenswrapper[4922]: I0218 11:39:29.735182 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:39:29 crc kubenswrapper[4922]: I0218 11:39:29.927109 4922 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-v6m8j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 18 11:39:29 crc kubenswrapper[4922]: I0218 11:39:29.927171 4922 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 18 11:39:29 crc kubenswrapper[4922]: I0218 11:39:29.984705 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-b6dxx" Feb 18 11:39:30 crc kubenswrapper[4922]: I0218 11:39:30.831915 4922 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pj7qx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:39:30 crc kubenswrapper[4922]: I0218 11:39:30.832910 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:39:35 crc kubenswrapper[4922]: I0218 11:39:35.236661 4922 generic.go:334] "Generic (PLEG): container finished" podID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerID="6194b5e6d35bc369d7c503f0dab1c1b4699e56c00e185999f0dd2d244bb4e4b6" exitCode=0 Feb 18 11:39:35 crc kubenswrapper[4922]: I0218 11:39:35.236750 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" event={"ID":"c3bb2fc9-822c-4f53-98bf-70933744cf7f","Type":"ContainerDied","Data":"6194b5e6d35bc369d7c503f0dab1c1b4699e56c00e185999f0dd2d244bb4e4b6"} Feb 18 11:39:35 crc kubenswrapper[4922]: I0218 11:39:35.239221 4922 generic.go:334] "Generic (PLEG): container finished" podID="158f0672-c017-4e45-a564-96de81f21772" containerID="5e927c6e4399c8def57efddaefd76cf053c5e57aa111182c3f59a17801fceec0" exitCode=0 Feb 18 11:39:35 crc kubenswrapper[4922]: I0218 11:39:35.239262 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" event={"ID":"158f0672-c017-4e45-a564-96de81f21772","Type":"ContainerDied","Data":"5e927c6e4399c8def57efddaefd76cf053c5e57aa111182c3f59a17801fceec0"} Feb 18 11:39:39 crc kubenswrapper[4922]: I0218 11:39:39.807674 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:39:39 crc kubenswrapper[4922]: I0218 11:39:39.808434 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:39:40 crc kubenswrapper[4922]: I0218 11:39:40.553722 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rj4b9" Feb 18 11:39:40 
crc kubenswrapper[4922]: I0218 11:39:40.833016 4922 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pj7qx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:39:40 crc kubenswrapper[4922]: I0218 11:39:40.833081 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:39:40 crc kubenswrapper[4922]: I0218 11:39:40.926878 4922 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-v6m8j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:39:40 crc kubenswrapper[4922]: I0218 11:39:40.926940 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:39:42 crc kubenswrapper[4922]: E0218 11:39:42.238139 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 11:39:42 crc kubenswrapper[4922]: E0218 11:39:42.238892 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8c89j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wg8w6_openshift-marketplace(80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:39:42 crc kubenswrapper[4922]: E0218 11:39:42.240215 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wg8w6" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.342083 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wg8w6" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.428923 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.429214 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzdsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nf4nk_openshift-marketplace(d0a05956-6087-461d-a271-52db98c6032a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.430901 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nf4nk" podUID="d0a05956-6087-461d-a271-52db98c6032a" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.488745 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.489160 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7zw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5vjsn_openshift-marketplace(bf0d2342-e758-43cc-8c89-adc3ceb98453): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:39:43 crc kubenswrapper[4922]: E0218 11:39:43.490407 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5vjsn" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.510707 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5vjsn" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.511347 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nf4nk" podUID="d0a05956-6087-461d-a271-52db98c6032a" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.563496 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.570539 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.593670 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.593935 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0f44-f90d-4e2e-a41a-0d785c890c11" containerName="pruner" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.593952 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0f44-f90d-4e2e-a41a-0d785c890c11" containerName="pruner" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.593975 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.593984 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.594000 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.594009 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.594134 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="158f0672-c017-4e45-a564-96de81f21772" containerName="controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.594148 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" containerName="route-controller-manager" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.594164 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4f0f44-f90d-4e2e-a41a-0d785c890c11" containerName="pruner" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.594649 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.605675 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667555 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k46td\" (UniqueName: \"kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td\") pod \"158f0672-c017-4e45-a564-96de81f21772\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667616 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert\") pod \"158f0672-c017-4e45-a564-96de81f21772\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667680 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config\") pod \"158f0672-c017-4e45-a564-96de81f21772\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667716 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca\") pod \"158f0672-c017-4e45-a564-96de81f21772\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667743 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert\") pod \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667782 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config\") pod \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667809 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca\") pod \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667844 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkpw\" (UniqueName: \"kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw\") pod \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\" (UID: \"c3bb2fc9-822c-4f53-98bf-70933744cf7f\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.667890 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles\") pod \"158f0672-c017-4e45-a564-96de81f21772\" (UID: \"158f0672-c017-4e45-a564-96de81f21772\") " Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668115 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjhth\" (UniqueName: \"kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668160 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668192 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668216 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668448 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca" (OuterVolumeSpecName: "client-ca") pod "158f0672-c017-4e45-a564-96de81f21772" (UID: "158f0672-c017-4e45-a564-96de81f21772"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668587 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config" (OuterVolumeSpecName: "config") pod "158f0672-c017-4e45-a564-96de81f21772" (UID: "158f0672-c017-4e45-a564-96de81f21772"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668607 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config" (OuterVolumeSpecName: "config") pod "c3bb2fc9-822c-4f53-98bf-70933744cf7f" (UID: "c3bb2fc9-822c-4f53-98bf-70933744cf7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.668801 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "158f0672-c017-4e45-a564-96de81f21772" (UID: "158f0672-c017-4e45-a564-96de81f21772"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.669133 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c3bb2fc9-822c-4f53-98bf-70933744cf7f" (UID: "c3bb2fc9-822c-4f53-98bf-70933744cf7f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.681742 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c3bb2fc9-822c-4f53-98bf-70933744cf7f" (UID: "c3bb2fc9-822c-4f53-98bf-70933744cf7f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.682874 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "158f0672-c017-4e45-a564-96de81f21772" (UID: "158f0672-c017-4e45-a564-96de81f21772"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.682946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td" (OuterVolumeSpecName: "kube-api-access-k46td") pod "158f0672-c017-4e45-a564-96de81f21772" (UID: "158f0672-c017-4e45-a564-96de81f21772"). InnerVolumeSpecName "kube-api-access-k46td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.683503 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw" (OuterVolumeSpecName: "kube-api-access-6lkpw") pod "c3bb2fc9-822c-4f53-98bf-70933744cf7f" (UID: "c3bb2fc9-822c-4f53-98bf-70933744cf7f"). InnerVolumeSpecName "kube-api-access-6lkpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769114 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769134 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769195 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjhth\" (UniqueName: \"kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769260 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769275 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769288 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3bb2fc9-822c-4f53-98bf-70933744cf7f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769299 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769309 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3bb2fc9-822c-4f53-98bf-70933744cf7f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769320 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkpw\" (UniqueName: \"kubernetes.io/projected/c3bb2fc9-822c-4f53-98bf-70933744cf7f-kube-api-access-6lkpw\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769332 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/158f0672-c017-4e45-a564-96de81f21772-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769343 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k46td\" (UniqueName: \"kubernetes.io/projected/158f0672-c017-4e45-a564-96de81f21772-kube-api-access-k46td\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.769353 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/158f0672-c017-4e45-a564-96de81f21772-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.770719 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.770770 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.773060 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.785518 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjhth\" (UniqueName: \"kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth\") pod \"route-controller-manager-588997d685-rq7xx\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: I0218 11:39:45.932565 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.981733 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.981880 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvvjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lcnjk_openshift-marketplace(fc9b41f8-ac9b-4166-a2a6-80326e19254a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:39:45 crc kubenswrapper[4922]: E0218 11:39:45.983050 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lcnjk" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.086144 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pspfr"] Feb 18 11:39:46 crc kubenswrapper[4922]: E0218 11:39:46.176234 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 11:39:46 crc kubenswrapper[4922]: E0218 11:39:46.176468 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x5ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5lflw_openshift-marketplace(9cddee0a-8b13-429b-89b6-e820f8f3ec59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:39:46 crc kubenswrapper[4922]: E0218 11:39:46.177680 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5lflw" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.186191 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:46 crc kubenswrapper[4922]: W0218 11:39:46.196489 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15bde054_692c_48d7_9289_37ba209fa899.slice/crio-4ffa7a4229a3fad7ed4a84f18a413cfd9e9c021ec94c8917ef154f3e1f7c78a5 WatchSource:0}: Error finding container 4ffa7a4229a3fad7ed4a84f18a413cfd9e9c021ec94c8917ef154f3e1f7c78a5: Status 404 returned error can't find the container with id 4ffa7a4229a3fad7ed4a84f18a413cfd9e9c021ec94c8917ef154f3e1f7c78a5 Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.293112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" event={"ID":"c3bb2fc9-822c-4f53-98bf-70933744cf7f","Type":"ContainerDied","Data":"52db1346d3e5c28577ccc9c9f0dc56c1b32dc4dc3a9883af1710dab9823da00c"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.293173 4922 scope.go:117] "RemoveContainer" containerID="6194b5e6d35bc369d7c503f0dab1c1b4699e56c00e185999f0dd2d244bb4e4b6" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.293141 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.296096 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" event={"ID":"15bde054-692c-48d7-9289-37ba209fa899","Type":"ContainerStarted","Data":"4ffa7a4229a3fad7ed4a84f18a413cfd9e9c021ec94c8917ef154f3e1f7c78a5"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.301601 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerStarted","Data":"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.308007 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerStarted","Data":"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.309812 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pspfr" event={"ID":"4702cf45-b47b-4291-a553-5bfc7bc22674","Type":"ContainerStarted","Data":"fe2dfb96ca37dee3ade1541dcfa518d248bc5a915bfb3f141cd9748702d9dc7b"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.311003 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" event={"ID":"158f0672-c017-4e45-a564-96de81f21772","Type":"ContainerDied","Data":"2d992c5fc4f578358729ef2e1374662b53a6ba4618dfe838d93d24e0c5e5a820"} Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.311071 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pj7qx" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.314442 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerStarted","Data":"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1"} Feb 18 11:39:46 crc kubenswrapper[4922]: E0218 11:39:46.315313 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5lflw" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.315404 4922 scope.go:117] "RemoveContainer" containerID="5e927c6e4399c8def57efddaefd76cf053c5e57aa111182c3f59a17801fceec0" Feb 18 11:39:46 crc kubenswrapper[4922]: E0218 11:39:46.315828 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lcnjk" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.399873 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.402478 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6m8j"] Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.446989 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.451579 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pj7qx"] Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.979380 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158f0672-c017-4e45-a564-96de81f21772" path="/var/lib/kubelet/pods/158f0672-c017-4e45-a564-96de81f21772/volumes" Feb 18 11:39:46 crc kubenswrapper[4922]: I0218 11:39:46.980242 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bb2fc9-822c-4f53-98bf-70933744cf7f" path="/var/lib/kubelet/pods/c3bb2fc9-822c-4f53-98bf-70933744cf7f/volumes" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.204197 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.205036 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.206706 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.206876 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.207682 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.207933 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.207944 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.212099 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.216757 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.219415 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.283004 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.288906 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.288951 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.289053 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqh2b\" (UniqueName: \"kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.289089 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 
18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.289114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.320413 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pspfr" event={"ID":"4702cf45-b47b-4291-a553-5bfc7bc22674","Type":"ContainerStarted","Data":"8badc5fff288446a9924553a34a6c1a2dbca9ca14cd487ff8689885221e9eb21"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.320479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pspfr" event={"ID":"4702cf45-b47b-4291-a553-5bfc7bc22674","Type":"ContainerStarted","Data":"f5c18659fc59cc701f027079e2e983aa960911b7cfcf99509ebb977934856e7f"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.323393 4922 generic.go:334] "Generic (PLEG): container finished" podID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerID="f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1" exitCode=0 Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.323539 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerDied","Data":"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.327249 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" event={"ID":"15bde054-692c-48d7-9289-37ba209fa899","Type":"ContainerStarted","Data":"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.327640 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.330408 4922 generic.go:334] "Generic (PLEG): container finished" podID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerID="e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1" exitCode=0 Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.330474 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerDied","Data":"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.332746 4922 generic.go:334] "Generic (PLEG): container finished" podID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerID="a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395" exitCode=0 Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.332778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerDied","Data":"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395"} Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.335224 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.345054 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pspfr" podStartSLOduration=169.345036652 podStartE2EDuration="2m49.345036652s" podCreationTimestamp="2026-02-18 11:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:47.342211401 +0000 UTC m=+189.069915481" watchObservedRunningTime="2026-02-18 11:39:47.345036652 +0000 UTC m=+189.072740732" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.390329 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.390404 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.390468 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqh2b\" (UniqueName: \"kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.390543 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.390589 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.393381 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.393630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " 
pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.393821 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.396893 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" podStartSLOduration=20.396873745 podStartE2EDuration="20.396873745s" podCreationTimestamp="2026-02-18 11:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:47.393204982 +0000 UTC m=+189.120909062" watchObservedRunningTime="2026-02-18 11:39:47.396873745 +0000 UTC m=+189.124577825" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.412675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqh2b\" (UniqueName: \"kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.429231 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert\") pod \"controller-manager-588bb8688f-kgnf7\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.520779 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:47 crc kubenswrapper[4922]: I0218 11:39:47.702901 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:39:47 crc kubenswrapper[4922]: W0218 11:39:47.709685 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod381bae09_292e_47ab_a85b_eeec711acdd9.slice/crio-24eefe81e8715469a6d268edf7354dca96b3d7cb46067661d42653e51f19e9c9 WatchSource:0}: Error finding container 24eefe81e8715469a6d268edf7354dca96b3d7cb46067661d42653e51f19e9c9: Status 404 returned error can't find the container with id 24eefe81e8715469a6d268edf7354dca96b3d7cb46067661d42653e51f19e9c9 Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.314671 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.341304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerStarted","Data":"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd"} Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.343593 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" event={"ID":"381bae09-292e-47ab-a85b-eeec711acdd9","Type":"ContainerStarted","Data":"888942b059f1b7ce4402d4d451bbe5ffef2dc01e980cc850af5d6d975e6db9a0"} Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.343634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" event={"ID":"381bae09-292e-47ab-a85b-eeec711acdd9","Type":"ContainerStarted","Data":"24eefe81e8715469a6d268edf7354dca96b3d7cb46067661d42653e51f19e9c9"} Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.343972 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.351924 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerStarted","Data":"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927"} Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.355855 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerStarted","Data":"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77"} Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.356127 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.356266 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" podUID="15bde054-692c-48d7-9289-37ba209fa899" containerName="route-controller-manager" containerID="cri-o://7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c" gracePeriod=30 Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 
11:39:48.371926 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nrxb6" podStartSLOduration=2.47432703 podStartE2EDuration="41.371909604s" podCreationTimestamp="2026-02-18 11:39:07 +0000 UTC" firstStartedPulling="2026-02-18 11:39:08.926136785 +0000 UTC m=+150.653840865" lastFinishedPulling="2026-02-18 11:39:47.823719359 +0000 UTC m=+189.551423439" observedRunningTime="2026-02-18 11:39:48.371577306 +0000 UTC m=+190.099281386" watchObservedRunningTime="2026-02-18 11:39:48.371909604 +0000 UTC m=+190.099613684" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.388850 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wz74v" podStartSLOduration=2.681490798 podStartE2EDuration="38.388835453s" podCreationTimestamp="2026-02-18 11:39:10 +0000 UTC" firstStartedPulling="2026-02-18 11:39:12.044606967 +0000 UTC m=+153.772311047" lastFinishedPulling="2026-02-18 11:39:47.751951632 +0000 UTC m=+189.479655702" observedRunningTime="2026-02-18 11:39:48.38714703 +0000 UTC m=+190.114851110" watchObservedRunningTime="2026-02-18 11:39:48.388835453 +0000 UTC m=+190.116539533" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.449109 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" podStartSLOduration=1.449088028 podStartE2EDuration="1.449088028s" podCreationTimestamp="2026-02-18 11:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:48.443021984 +0000 UTC m=+190.170726064" watchObservedRunningTime="2026-02-18 11:39:48.449088028 +0000 UTC m=+190.176792108" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.452431 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dzbt" podStartSLOduration=2.671793258 podStartE2EDuration="41.452414912s" podCreationTimestamp="2026-02-18 11:39:07 +0000 UTC" firstStartedPulling="2026-02-18 11:39:08.950178673 +0000 UTC m=+150.677882743" lastFinishedPulling="2026-02-18 11:39:47.730800317 +0000 UTC m=+189.458504397" observedRunningTime="2026-02-18 11:39:48.404855468 +0000 UTC m=+190.132559558" watchObservedRunningTime="2026-02-18 11:39:48.452414912 +0000 UTC m=+190.180118992" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.772756 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.808275 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:39:48 crc kubenswrapper[4922]: E0218 11:39:48.808559 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15bde054-692c-48d7-9289-37ba209fa899" containerName="route-controller-manager" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.808575 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="15bde054-692c-48d7-9289-37ba209fa899" containerName="route-controller-manager" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.808735 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="15bde054-692c-48d7-9289-37ba209fa899" containerName="route-controller-manager" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.809187 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.814332 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhdl\" (UniqueName: \"kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.814421 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.814452 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.814486 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.838527 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.915404 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config\") pod \"15bde054-692c-48d7-9289-37ba209fa899\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 
11:39:48.915467 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert\") pod \"15bde054-692c-48d7-9289-37ba209fa899\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.915775 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca\") pod \"15bde054-692c-48d7-9289-37ba209fa899\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916027 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjhth\" (UniqueName: \"kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth\") pod \"15bde054-692c-48d7-9289-37ba209fa899\" (UID: \"15bde054-692c-48d7-9289-37ba209fa899\") " Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916309 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916380 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916443 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916484 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config" (OuterVolumeSpecName: "config") pod "15bde054-692c-48d7-9289-37ba209fa899" (UID: "15bde054-692c-48d7-9289-37ba209fa899"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916639 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mhdl\" (UniqueName: \"kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.916715 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.917410 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.917441 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.917498 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca" (OuterVolumeSpecName: "client-ca") pod "15bde054-692c-48d7-9289-37ba209fa899" (UID: "15bde054-692c-48d7-9289-37ba209fa899"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.921730 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.926601 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "15bde054-692c-48d7-9289-37ba209fa899" (UID: "15bde054-692c-48d7-9289-37ba209fa899"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.926628 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth" (OuterVolumeSpecName: "kube-api-access-tjhth") pod "15bde054-692c-48d7-9289-37ba209fa899" (UID: "15bde054-692c-48d7-9289-37ba209fa899"). InnerVolumeSpecName "kube-api-access-tjhth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:48 crc kubenswrapper[4922]: I0218 11:39:48.937847 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mhdl\" (UniqueName: \"kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl\") pod \"route-controller-manager-66bbd976d-c94kz\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.017147 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/15bde054-692c-48d7-9289-37ba209fa899-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.017178 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjhth\" (UniqueName: \"kubernetes.io/projected/15bde054-692c-48d7-9289-37ba209fa899-kube-api-access-tjhth\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.017202 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15bde054-692c-48d7-9289-37ba209fa899-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.128739 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.221561 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.222355 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.227531 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.228356 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.234519 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.320250 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.320311 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.368114 4922 generic.go:334] "Generic (PLEG): container finished" podID="15bde054-692c-48d7-9289-37ba209fa899" containerID="7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c" exitCode=0 Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.368785 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" event={"ID":"15bde054-692c-48d7-9289-37ba209fa899","Type":"ContainerDied","Data":"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c"} Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.368816 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" event={"ID":"15bde054-692c-48d7-9289-37ba209fa899","Type":"ContainerDied","Data":"4ffa7a4229a3fad7ed4a84f18a413cfd9e9c021ec94c8917ef154f3e1f7c78a5"} Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.368833 4922 scope.go:117] "RemoveContainer" containerID="7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.369315 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.395126 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.404727 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-rq7xx"] Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.406458 4922 scope.go:117] "RemoveContainer" containerID="7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c" Feb 18 11:39:49 crc kubenswrapper[4922]: E0218 11:39:49.407555 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c\": container with ID starting with 7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c not found: ID does not exist" containerID="7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.407587 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c"} err="failed to get container status \"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c\": rpc error: code = NotFound desc = could not find container \"7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c\": container with ID starting with 7a881cfb4efbbdd30238b4a8820d29058b35729d117859298ac41459fbf8841c not found: ID does not exist" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.421697 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.421822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.427007 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.450618 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.601451 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.614318 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:39:49 crc kubenswrapper[4922]: W0218 11:39:49.626548 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ed4eefa_9d69_468e_b783_af3d0a1e7e75.slice/crio-689bf74da1dea7687a9978545b709d3db28e7791ae16a26226addae8a1619d61 WatchSource:0}: Error finding container 689bf74da1dea7687a9978545b709d3db28e7791ae16a26226addae8a1619d61: Status 404 returned error can't find the container with id 689bf74da1dea7687a9978545b709d3db28e7791ae16a26226addae8a1619d61 Feb 18 11:39:49 crc kubenswrapper[4922]: I0218 11:39:49.878906 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.395021 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8148f2a-3e96-4e61-8537-7cd39940a907","Type":"ContainerStarted","Data":"c96d9c5829843ccf1859da54d4fbe26e00d3587d1857a6d9339eeb6676859e82"} Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.396396 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" event={"ID":"1ed4eefa-9d69-468e-b783-af3d0a1e7e75","Type":"ContainerStarted","Data":"e286f860b267a30576d94c00bd4a0c76c12231f84c1094cab4113f22c54770c4"} Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.396420 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" event={"ID":"1ed4eefa-9d69-468e-b783-af3d0a1e7e75","Type":"ContainerStarted","Data":"689bf74da1dea7687a9978545b709d3db28e7791ae16a26226addae8a1619d61"} Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.398345 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.402832 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.436287 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" podStartSLOduration=3.436271676 podStartE2EDuration="3.436271676s" podCreationTimestamp="2026-02-18 11:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:50.418746192 +0000 UTC m=+192.146450272" watchObservedRunningTime="2026-02-18 11:39:50.436271676 +0000 UTC m=+192.163975756" Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.606860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.607246 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:39:50 crc kubenswrapper[4922]: I0218 11:39:50.984665 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="15bde054-692c-48d7-9289-37ba209fa899" path="/var/lib/kubelet/pods/15bde054-692c-48d7-9289-37ba209fa899/volumes" Feb 18 11:39:51 crc kubenswrapper[4922]: I0218 11:39:51.406713 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8148f2a-3e96-4e61-8537-7cd39940a907","Type":"ContainerStarted","Data":"8781754afcfa0017e16623396079b0583edf117901c99e1ded8fe44b7e8ca6d4"} Feb 18 11:39:52 crc kubenswrapper[4922]: I0218 11:39:52.059297 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wz74v" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="registry-server" probeResult="failure" output=< Feb 18 11:39:52 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 11:39:52 crc kubenswrapper[4922]: > Feb 18 11:39:52 crc kubenswrapper[4922]: I0218 11:39:52.415511 4922 generic.go:334] "Generic (PLEG): container finished" podID="f8148f2a-3e96-4e61-8537-7cd39940a907" containerID="8781754afcfa0017e16623396079b0583edf117901c99e1ded8fe44b7e8ca6d4" exitCode=0 Feb 18 11:39:52 crc kubenswrapper[4922]: I0218 11:39:52.415603 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8148f2a-3e96-4e61-8537-7cd39940a907","Type":"ContainerDied","Data":"8781754afcfa0017e16623396079b0583edf117901c99e1ded8fe44b7e8ca6d4"} Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.719374 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.880328 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access\") pod \"f8148f2a-3e96-4e61-8537-7cd39940a907\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.880407 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir\") pod \"f8148f2a-3e96-4e61-8537-7cd39940a907\" (UID: \"f8148f2a-3e96-4e61-8537-7cd39940a907\") " Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.880543 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8148f2a-3e96-4e61-8537-7cd39940a907" (UID: "f8148f2a-3e96-4e61-8537-7cd39940a907"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.880843 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8148f2a-3e96-4e61-8537-7cd39940a907-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.887226 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8148f2a-3e96-4e61-8537-7cd39940a907" (UID: "f8148f2a-3e96-4e61-8537-7cd39940a907"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:39:53 crc kubenswrapper[4922]: I0218 11:39:53.981891 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8148f2a-3e96-4e61-8537-7cd39940a907-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.014059 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:39:54 crc kubenswrapper[4922]: E0218 11:39:54.014341 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8148f2a-3e96-4e61-8537-7cd39940a907" containerName="pruner" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.014375 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8148f2a-3e96-4e61-8537-7cd39940a907" containerName="pruner" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.014516 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8148f2a-3e96-4e61-8537-7cd39940a907" containerName="pruner" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.014997 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.032537 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.184074 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.184157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.184313 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.285862 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.286012 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.286043 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.286141 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.286157 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.302346 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access\") pod \"installer-9-crc\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.332566 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.431419 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f8148f2a-3e96-4e61-8537-7cd39940a907","Type":"ContainerDied","Data":"c96d9c5829843ccf1859da54d4fbe26e00d3587d1857a6d9339eeb6676859e82"} Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.431454 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96d9c5829843ccf1859da54d4fbe26e00d3587d1857a6d9339eeb6676859e82" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.431501 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:39:54 crc kubenswrapper[4922]: I0218 11:39:54.813117 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:39:55 crc kubenswrapper[4922]: I0218 11:39:55.437353 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ad37f26c-293d-42c8-a88e-21e0a2c5e05d","Type":"ContainerStarted","Data":"0f5aa83e132da84ebb77b9c8d8371acd88a3509bc96c992baa1b131822fe3971"} Feb 18 11:39:55 crc kubenswrapper[4922]: I0218 11:39:55.437420 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ad37f26c-293d-42c8-a88e-21e0a2c5e05d","Type":"ContainerStarted","Data":"2993e8b81cf0d5924b6da2a78a590d591a1692ddbff65fb9a4fa27002842c2e5"} Feb 18 11:39:55 crc kubenswrapper[4922]: I0218 11:39:55.452282 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.452261868 podStartE2EDuration="1.452261868s" podCreationTimestamp="2026-02-18 11:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:39:55.449703463 +0000 UTC m=+197.177407543" watchObservedRunningTime="2026-02-18 11:39:55.452261868 +0000 UTC m=+197.179965948" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.384478 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.384759 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.439413 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.448149 4922 generic.go:334] "Generic (PLEG): container finished" podID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerID="88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b" exitCode=0 Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.448213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerDied","Data":"88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b"} Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.493012 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.921538 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.921596 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:57 crc kubenswrapper[4922]: I0218 11:39:57.999389 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:39:58 crc kubenswrapper[4922]: I0218 11:39:58.497248 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 
11:39:59 crc kubenswrapper[4922]: I0218 11:39:59.203815 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:40:00 crc kubenswrapper[4922]: I0218 11:40:00.464186 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nrxb6" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="registry-server" containerID="cri-o://ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd" gracePeriod=2 Feb 18 11:40:00 crc kubenswrapper[4922]: I0218 11:40:00.659584 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:40:00 crc kubenswrapper[4922]: I0218 11:40:00.698481 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:40:00 crc kubenswrapper[4922]: E0218 11:40:00.700013 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48ccc67a_6393_4dea_9c00_24bbc55e34d3.slice/crio-ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd.scope\": RecentStats: unable to find data in memory cache]" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.400632 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.480250 4922 generic.go:334] "Generic (PLEG): container finished" podID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerID="ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd" exitCode=0 Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.480326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerDied","Data":"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd"} Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.480376 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrxb6" event={"ID":"48ccc67a-6393-4dea-9c00-24bbc55e34d3","Type":"ContainerDied","Data":"cf75383b2030608656ce7bf5f480ccca8d0511a948c0bdf822601c66ee3563e2"} Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.480401 4922 scope.go:117] "RemoveContainer" containerID="ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.480697 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrxb6" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.502637 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities\") pod \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.502685 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tq9z\" (UniqueName: \"kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z\") pod \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.503838 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities" (OuterVolumeSpecName: "utilities") pod "48ccc67a-6393-4dea-9c00-24bbc55e34d3" (UID: "48ccc67a-6393-4dea-9c00-24bbc55e34d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.508731 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z" (OuterVolumeSpecName: "kube-api-access-9tq9z") pod "48ccc67a-6393-4dea-9c00-24bbc55e34d3" (UID: "48ccc67a-6393-4dea-9c00-24bbc55e34d3"). InnerVolumeSpecName "kube-api-access-9tq9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.567790 4922 scope.go:117] "RemoveContainer" containerID="f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.603420 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content\") pod \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\" (UID: \"48ccc67a-6393-4dea-9c00-24bbc55e34d3\") " Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.603756 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.603774 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tq9z\" (UniqueName: \"kubernetes.io/projected/48ccc67a-6393-4dea-9c00-24bbc55e34d3-kube-api-access-9tq9z\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.655242 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48ccc67a-6393-4dea-9c00-24bbc55e34d3" (UID: "48ccc67a-6393-4dea-9c00-24bbc55e34d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.708234 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ccc67a-6393-4dea-9c00-24bbc55e34d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.710806 4922 scope.go:117] "RemoveContainer" containerID="26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.829839 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.832555 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nrxb6"] Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.915506 4922 scope.go:117] "RemoveContainer" containerID="ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd" Feb 18 11:40:01 crc kubenswrapper[4922]: E0218 11:40:01.915921 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd\": container with ID starting with ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd not found: ID does not exist" containerID="ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.915955 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd"} err="failed to get container status \"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd\": rpc error: code = NotFound desc = could not find container \"ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd\": container with ID starting with ade67fc75daaf75a5b69f90fb93842c53cb85bb22999b13e9dd0f075f8b6c4bd not found: ID does not exist" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.915982 4922 scope.go:117] "RemoveContainer" containerID="f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1" Feb 18 11:40:01 crc kubenswrapper[4922]: E0218 11:40:01.921224 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1\": container with ID starting with f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1 not found: ID does not exist" containerID="f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.921263 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1"} err="failed to get container status \"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1\": rpc error: code = NotFound desc = could not find container \"f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1\": container with ID starting with f96343f5641693f6e288bbd4accc54022881c3b1914c16bca3dd70533cbcb9f1 not found: ID does not exist" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.921305 4922 scope.go:117] "RemoveContainer" containerID="26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16" Feb 18 11:40:01 crc kubenswrapper[4922]: E0218 
11:40:01.921705 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16\": container with ID starting with 26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16 not found: ID does not exist" containerID="26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16" Feb 18 11:40:01 crc kubenswrapper[4922]: I0218 11:40:01.921727 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16"} err="failed to get container status \"26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16\": rpc error: code = NotFound desc = could not find container \"26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16\": container with ID starting with 26033474e45a010781ba836c11c8f393e1aeb5595fb8120441fc214ed20d9a16 not found: ID does not exist" Feb 18 11:40:02 crc kubenswrapper[4922]: I0218 11:40:02.493999 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerStarted","Data":"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd"} Feb 18 11:40:02 crc kubenswrapper[4922]: I0218 11:40:02.497220 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerStarted","Data":"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e"} Feb 18 11:40:02 crc kubenswrapper[4922]: I0218 11:40:02.985604 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" path="/var/lib/kubelet/pods/48ccc67a-6393-4dea-9c00-24bbc55e34d3/volumes" Feb 18 11:40:03 crc kubenswrapper[4922]: I0218 11:40:03.505321 4922 generic.go:334] "Generic (PLEG): container finished" podID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerID="bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd" exitCode=0 Feb 18 11:40:03 crc kubenswrapper[4922]: I0218 11:40:03.505459 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerDied","Data":"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd"} Feb 18 11:40:03 crc kubenswrapper[4922]: I0218 11:40:03.508313 4922 generic.go:334] "Generic (PLEG): container finished" podID="d0a05956-6087-461d-a271-52db98c6032a" containerID="647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8" exitCode=0 Feb 18 11:40:03 crc kubenswrapper[4922]: I0218 11:40:03.508395 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerDied","Data":"647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8"} Feb 18 11:40:03 crc kubenswrapper[4922]: I0218 11:40:03.525939 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wg8w6" podStartSLOduration=4.3414163 podStartE2EDuration="53.525919109s" podCreationTimestamp="2026-02-18 11:39:10 +0000 UTC" firstStartedPulling="2026-02-18 11:39:12.049587493 +0000 UTC m=+153.777291573" lastFinishedPulling="2026-02-18 11:40:01.234090302 +0000 UTC m=+202.961794382" observedRunningTime="2026-02-18 
11:40:02.514264968 +0000 UTC m=+204.241969048" watchObservedRunningTime="2026-02-18 11:40:03.525919109 +0000 UTC m=+205.253623189" Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.205241 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.205801 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" podUID="381bae09-292e-47ab-a85b-eeec711acdd9" containerName="controller-manager" containerID="cri-o://888942b059f1b7ce4402d4d451bbe5ffef2dc01e980cc850af5d6d975e6db9a0" gracePeriod=30 Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.258893 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.259457 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" podUID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" containerName="route-controller-manager" containerID="cri-o://e286f860b267a30576d94c00bd4a0c76c12231f84c1094cab4113f22c54770c4" gracePeriod=30 Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.521949 4922 patch_prober.go:28] interesting pod/controller-manager-588bb8688f-kgnf7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 18 11:40:07 crc kubenswrapper[4922]: I0218 11:40:07.522015 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" podUID="381bae09-292e-47ab-a85b-eeec711acdd9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 18 11:40:08 crc kubenswrapper[4922]: I0218 11:40:08.546644 4922 generic.go:334] "Generic (PLEG): container finished" podID="381bae09-292e-47ab-a85b-eeec711acdd9" containerID="888942b059f1b7ce4402d4d451bbe5ffef2dc01e980cc850af5d6d975e6db9a0" exitCode=0 Feb 18 11:40:08 crc kubenswrapper[4922]: I0218 11:40:08.546935 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" event={"ID":"381bae09-292e-47ab-a85b-eeec711acdd9","Type":"ContainerDied","Data":"888942b059f1b7ce4402d4d451bbe5ffef2dc01e980cc850af5d6d975e6db9a0"} Feb 18 11:40:08 crc kubenswrapper[4922]: I0218 11:40:08.548724 4922 generic.go:334] "Generic (PLEG): container finished" podID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" containerID="e286f860b267a30576d94c00bd4a0c76c12231f84c1094cab4113f22c54770c4" exitCode=0 Feb 18 11:40:08 crc kubenswrapper[4922]: I0218 11:40:08.548813 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" event={"ID":"1ed4eefa-9d69-468e-b783-af3d0a1e7e75","Type":"ContainerDied","Data":"e286f860b267a30576d94c00bd4a0c76c12231f84c1094cab4113f22c54770c4"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.128573 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167479 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:09 crc kubenswrapper[4922]: E0218 11:40:09.167657 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="registry-server" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167675 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="registry-server" Feb 18 11:40:09 crc kubenswrapper[4922]: E0218 11:40:09.167689 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="extract-content" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167695 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="extract-content" Feb 18 11:40:09 crc kubenswrapper[4922]: E0218 11:40:09.167703 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="extract-utilities" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167709 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="extract-utilities" Feb 18 11:40:09 crc kubenswrapper[4922]: E0218 11:40:09.167722 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" containerName="route-controller-manager" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167728 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" containerName="route-controller-manager" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167812 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" containerName="route-controller-manager" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.167822 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ccc67a-6393-4dea-9c00-24bbc55e34d3" containerName="registry-server" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.168168 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.175481 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.194159 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310522 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqh2b\" (UniqueName: \"kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b\") pod \"381bae09-292e-47ab-a85b-eeec711acdd9\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310573 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mhdl\" (UniqueName: \"kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl\") pod \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310602 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config\") pod \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310640 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles\") pod \"381bae09-292e-47ab-a85b-eeec711acdd9\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310664 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca\") pod \"381bae09-292e-47ab-a85b-eeec711acdd9\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310680 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca\") pod \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310737 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert\") pod \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\" (UID: \"1ed4eefa-9d69-468e-b783-af3d0a1e7e75\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310754 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert\") pod \"381bae09-292e-47ab-a85b-eeec711acdd9\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310773 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config\") pod \"381bae09-292e-47ab-a85b-eeec711acdd9\" (UID: \"381bae09-292e-47ab-a85b-eeec711acdd9\") " Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310936 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.310974 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.311001 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdp7v\" (UniqueName: \"kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.311049 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.316102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b" (OuterVolumeSpecName: "kube-api-access-dqh2b") pod "381bae09-292e-47ab-a85b-eeec711acdd9" (UID: "381bae09-292e-47ab-a85b-eeec711acdd9"). InnerVolumeSpecName "kube-api-access-dqh2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.316181 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl" (OuterVolumeSpecName: "kube-api-access-7mhdl") pod "1ed4eefa-9d69-468e-b783-af3d0a1e7e75" (UID: "1ed4eefa-9d69-468e-b783-af3d0a1e7e75"). InnerVolumeSpecName "kube-api-access-7mhdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.316584 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "381bae09-292e-47ab-a85b-eeec711acdd9" (UID: "381bae09-292e-47ab-a85b-eeec711acdd9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.316599 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca" (OuterVolumeSpecName: "client-ca") pod "381bae09-292e-47ab-a85b-eeec711acdd9" (UID: "381bae09-292e-47ab-a85b-eeec711acdd9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.317063 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ed4eefa-9d69-468e-b783-af3d0a1e7e75" (UID: "1ed4eefa-9d69-468e-b783-af3d0a1e7e75"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.317189 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config" (OuterVolumeSpecName: "config") pod "1ed4eefa-9d69-468e-b783-af3d0a1e7e75" (UID: "1ed4eefa-9d69-468e-b783-af3d0a1e7e75"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.317220 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config" (OuterVolumeSpecName: "config") pod "381bae09-292e-47ab-a85b-eeec711acdd9" (UID: "381bae09-292e-47ab-a85b-eeec711acdd9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.318773 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "381bae09-292e-47ab-a85b-eeec711acdd9" (UID: "381bae09-292e-47ab-a85b-eeec711acdd9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.319249 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ed4eefa-9d69-468e-b783-af3d0a1e7e75" (UID: "1ed4eefa-9d69-468e-b783-af3d0a1e7e75"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412439 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412519 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412567 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412609 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdp7v\" (UniqueName: \"kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412674 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412688 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412697 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412706 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/381bae09-292e-47ab-a85b-eeec711acdd9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412714 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412723 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/381bae09-292e-47ab-a85b-eeec711acdd9-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412733 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqh2b\" (UniqueName: 
\"kubernetes.io/projected/381bae09-292e-47ab-a85b-eeec711acdd9-kube-api-access-dqh2b\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412744 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mhdl\" (UniqueName: \"kubernetes.io/projected/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-kube-api-access-7mhdl\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.412753 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ed4eefa-9d69-468e-b783-af3d0a1e7e75-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.413813 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.414185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.417948 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.433706 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdp7v\" (UniqueName: \"kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v\") pod \"route-controller-manager-c5f5b6bc4-4xglc\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.493760 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.555242 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" event={"ID":"381bae09-292e-47ab-a85b-eeec711acdd9","Type":"ContainerDied","Data":"24eefe81e8715469a6d268edf7354dca96b3d7cb46067661d42653e51f19e9c9"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.555323 4922 scope.go:117] "RemoveContainer" containerID="888942b059f1b7ce4402d4d451bbe5ffef2dc01e980cc850af5d6d975e6db9a0" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.555269 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588bb8688f-kgnf7" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.557736 4922 generic.go:334] "Generic (PLEG): container finished" podID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerID="e63e35cde0feed6b8593caa8621cdbf494c204e6b29cfbb54918e219b2da13b4" exitCode=0 Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.557923 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerDied","Data":"e63e35cde0feed6b8593caa8621cdbf494c204e6b29cfbb54918e219b2da13b4"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.565169 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerStarted","Data":"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.573397 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerStarted","Data":"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.577634 4922 generic.go:334] "Generic (PLEG): container finished" podID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerID="0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181" exitCode=0 Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.577709 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerDied","Data":"0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.583582 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" event={"ID":"1ed4eefa-9d69-468e-b783-af3d0a1e7e75","Type":"ContainerDied","Data":"689bf74da1dea7687a9978545b709d3db28e7791ae16a26226addae8a1619d61"} Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.583642 4922 scope.go:117] "RemoveContainer" containerID="e286f860b267a30576d94c00bd4a0c76c12231f84c1094cab4113f22c54770c4" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.583790 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.626167 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vjsn" podStartSLOduration=2.845171413 podStartE2EDuration="1m1.626147721s" podCreationTimestamp="2026-02-18 11:39:08 +0000 UTC" firstStartedPulling="2026-02-18 11:39:09.998674541 +0000 UTC m=+151.726378621" lastFinishedPulling="2026-02-18 11:40:08.779650859 +0000 UTC m=+210.507354929" observedRunningTime="2026-02-18 11:40:09.622722919 +0000 UTC m=+211.350426999" watchObservedRunningTime="2026-02-18 11:40:09.626147721 +0000 UTC m=+211.353851811" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.648899 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.663227 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-588bb8688f-kgnf7"] Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.663327 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.663392 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.672352 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nf4nk" podStartSLOduration=2.926752521 podStartE2EDuration="1m0.672329261s" podCreationTimestamp="2026-02-18 11:39:09 +0000 UTC" firstStartedPulling="2026-02-18 11:39:11.031483733 +0000 UTC m=+152.759187813" lastFinishedPulling="2026-02-18 11:40:08.777060473 +0000 UTC m=+210.504764553" observedRunningTime="2026-02-18 11:40:09.655291725 +0000 UTC m=+211.382995795" watchObservedRunningTime="2026-02-18 11:40:09.672329261 +0000 UTC m=+211.400033351" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.677489 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.682487 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bbd976d-c94kz"] Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.761601 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:09 crc kubenswrapper[4922]: W0218 11:40:09.767688 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33610a6b_c93e_4578_b8d9_93f5a0dbe1a4.slice/crio-45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e WatchSource:0}: Error finding container 45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e: Status 404 returned error can't find the container with id 45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.807935 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.808006 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.808060 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.808773 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:40:09 crc kubenswrapper[4922]: I0218 11:40:09.808830 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b" gracePeriod=600 Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.600795 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" event={"ID":"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4","Type":"ContainerStarted","Data":"bebc46c7f9271b765da6413fad509e6ec5685b8951728d70348b293df49ce847"} Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.600839 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" event={"ID":"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4","Type":"ContainerStarted","Data":"45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e"} Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.726995 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nf4nk" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="registry-server" probeResult="failure" output=< Feb 18 11:40:10 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 11:40:10 crc kubenswrapper[4922]: > Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.981526 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed4eefa-9d69-468e-b783-af3d0a1e7e75" path="/var/lib/kubelet/pods/1ed4eefa-9d69-468e-b783-af3d0a1e7e75/volumes" Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.982770 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381bae09-292e-47ab-a85b-eeec711acdd9" path="/var/lib/kubelet/pods/381bae09-292e-47ab-a85b-eeec711acdd9/volumes" Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.994791 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:10 crc kubenswrapper[4922]: I0218 11:40:10.994999 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.039574 
4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.611804 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b" exitCode=0 Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.611932 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b"} Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.612213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f"} Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.619174 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerStarted","Data":"99eb33433ccb75dafa75d7dcda595a86756a6a3d18a2403102396dc29b8e05f3"} Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.623676 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.627827 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.656607 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" podStartSLOduration=4.656579463 podStartE2EDuration="4.656579463s" podCreationTimestamp="2026-02-18 11:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:40:11.654567153 +0000 UTC m=+213.382271243" watchObservedRunningTime="2026-02-18 11:40:11.656579463 +0000 UTC m=+213.384283543" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.686207 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lcnjk" podStartSLOduration=2.206283972 podStartE2EDuration="1m4.686189241s" podCreationTimestamp="2026-02-18 11:39:07 +0000 UTC" firstStartedPulling="2026-02-18 11:39:08.91292146 +0000 UTC m=+150.640625540" lastFinishedPulling="2026-02-18 11:40:11.392826729 +0000 UTC m=+213.120530809" observedRunningTime="2026-02-18 11:40:11.684915364 +0000 UTC m=+213.412619454" watchObservedRunningTime="2026-02-18 11:40:11.686189241 +0000 UTC m=+213.413893321" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.689629 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.971437 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:11 crc kubenswrapper[4922]: E0218 11:40:11.972061 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="381bae09-292e-47ab-a85b-eeec711acdd9" containerName="controller-manager" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.972081 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="381bae09-292e-47ab-a85b-eeec711acdd9" containerName="controller-manager" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.972207 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="381bae09-292e-47ab-a85b-eeec711acdd9" containerName="controller-manager" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.972943 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.975065 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.975112 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.975628 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.975775 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.977828 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.978322 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.989197 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:40:11 crc kubenswrapper[4922]: I0218 11:40:11.995999 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.054302 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.054369 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.054441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.054482 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjcrd\" (UniqueName: \"kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.054533 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.155524 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.155856 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.155969 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.156104 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjcrd\" (UniqueName: \"kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.157085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.157201 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.157294 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca\") pod 
\"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.157845 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.162816 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.179577 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjcrd\" (UniqueName: \"kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd\") pod \"controller-manager-5cc84f956-ltmd7\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.288344 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.500944 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:12 crc kubenswrapper[4922]: W0218 11:40:12.506208 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72ad78b5_bc12_488f_aab9_869895d67ce8.slice/crio-9ec1524a75d8bd2f1625be98e138e6cf305c4ac3d285301d8e45c5b5ccd89829 WatchSource:0}: Error finding container 9ec1524a75d8bd2f1625be98e138e6cf305c4ac3d285301d8e45c5b5ccd89829: Status 404 returned error can't find the container with id 9ec1524a75d8bd2f1625be98e138e6cf305c4ac3d285301d8e45c5b5ccd89829 Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.631524 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerStarted","Data":"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b"} Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.633061 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" event={"ID":"72ad78b5-bc12-488f-aab9-869895d67ce8","Type":"ContainerStarted","Data":"9ec1524a75d8bd2f1625be98e138e6cf305c4ac3d285301d8e45c5b5ccd89829"} Feb 18 11:40:12 crc kubenswrapper[4922]: I0218 11:40:12.655433 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5lflw" podStartSLOduration=4.137952085 podStartE2EDuration="1m6.655408254s" podCreationTimestamp="2026-02-18 11:39:06 +0000 UTC" firstStartedPulling="2026-02-18 11:39:08.942073978 +0000 UTC m=+150.669778058" lastFinishedPulling="2026-02-18 11:40:11.459530137 +0000 UTC m=+213.187234227" observedRunningTime="2026-02-18 11:40:12.651866838 +0000 UTC m=+214.379570918" 
watchObservedRunningTime="2026-02-18 11:40:12.655408254 +0000 UTC m=+214.383112364" Feb 18 11:40:13 crc kubenswrapper[4922]: I0218 11:40:13.600443 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:40:13 crc kubenswrapper[4922]: I0218 11:40:13.640471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" event={"ID":"72ad78b5-bc12-488f-aab9-869895d67ce8","Type":"ContainerStarted","Data":"c09c49cb94f646993c2a4b0fda6fbdd7f540489b4fa899ee2bacffa6390fbbe3"} Feb 18 11:40:13 crc kubenswrapper[4922]: I0218 11:40:13.663830 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" podStartSLOduration=6.6638067979999995 podStartE2EDuration="6.663806798s" podCreationTimestamp="2026-02-18 11:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:40:13.661024195 +0000 UTC m=+215.388728295" watchObservedRunningTime="2026-02-18 11:40:13.663806798 +0000 UTC m=+215.391510888" Feb 18 11:40:14 crc kubenswrapper[4922]: I0218 11:40:14.646972 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:14 crc kubenswrapper[4922]: I0218 11:40:14.646988 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wg8w6" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="registry-server" containerID="cri-o://b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e" gracePeriod=2 Feb 18 11:40:14 crc kubenswrapper[4922]: I0218 11:40:14.653757 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.376979 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.429115 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content\") pod \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.429176 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c89j\" (UniqueName: \"kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j\") pod \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.429257 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities\") pod \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\" (UID: \"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54\") " Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.430102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities" (OuterVolumeSpecName: "utilities") pod "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" (UID: "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.435072 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j" (OuterVolumeSpecName: "kube-api-access-8c89j") pod "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" (UID: "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54"). InnerVolumeSpecName "kube-api-access-8c89j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.531385 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.531441 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c89j\" (UniqueName: \"kubernetes.io/projected/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-kube-api-access-8c89j\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.553830 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" (UID: "80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.632706 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.659637 4922 generic.go:334] "Generic (PLEG): container finished" podID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerID="b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e" exitCode=0 Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.659709 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg8w6" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.659733 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerDied","Data":"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e"} Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.659777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg8w6" event={"ID":"80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54","Type":"ContainerDied","Data":"25d2b5e0e4b5a6b68be1bcb304a04f4b6084a6c0bdb0404fc5532afe958fbbd6"} Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.659796 4922 scope.go:117] "RemoveContainer" containerID="b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.677034 4922 scope.go:117] "RemoveContainer" containerID="88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.695692 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.698904 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wg8w6"] Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.707650 4922 scope.go:117] "RemoveContainer" containerID="8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.729177 4922 scope.go:117] "RemoveContainer" containerID="b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e" Feb 18 11:40:16 crc kubenswrapper[4922]: E0218 11:40:16.730243 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e\": container with ID starting with b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e not found: ID does not exist" containerID="b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.730308 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e"} err="failed to get container status \"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e\": rpc error: code = NotFound desc = could not find container \"b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e\": container with ID starting with b5153992a200b7e1d462bdf29a1a95548cb5c21776d32e5caa2c14598b127e2e not found: ID does not exist" Feb 18 11:40:16 crc 
kubenswrapper[4922]: I0218 11:40:16.730342 4922 scope.go:117] "RemoveContainer" containerID="88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b" Feb 18 11:40:16 crc kubenswrapper[4922]: E0218 11:40:16.730694 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b\": container with ID starting with 88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b not found: ID does not exist" containerID="88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.731032 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b"} err="failed to get container status \"88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b\": rpc error: code = NotFound desc = could not find container \"88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b\": container with ID starting with 88bed4d626019726ebb91e748414ebf1fc49f67cf72a95e0e09268ee83ee0d6b not found: ID does not exist" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.731071 4922 scope.go:117] "RemoveContainer" containerID="8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777" Feb 18 11:40:16 crc kubenswrapper[4922]: E0218 11:40:16.731354 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777\": container with ID starting with 8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777 not found: ID does not exist" containerID="8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.731402 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777"} err="failed to get container status \"8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777\": rpc error: code = NotFound desc = could not find container \"8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777\": container with ID starting with 8dd3aff6ccf90bcc5b2f3c4b1fa26fee9db4eb20a75dd95c5b684cb6dd48e777 not found: ID does not exist" Feb 18 11:40:16 crc kubenswrapper[4922]: I0218 11:40:16.982600 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" path="/var/lib/kubelet/pods/80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54/volumes" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.252886 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.253165 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.309979 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.629119 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.629525 4922 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.675022 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.716072 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:17 crc kubenswrapper[4922]: I0218 11:40:17.726601 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.007868 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.199478 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.199664 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.255563 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.685736 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lcnjk" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="registry-server" containerID="cri-o://99eb33433ccb75dafa75d7dcda595a86756a6a3d18a2403102396dc29b8e05f3" gracePeriod=2 Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.706888 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7mwg"] Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.738147 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.772721 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:40:19 crc kubenswrapper[4922]: I0218 11:40:19.817022 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:20 crc kubenswrapper[4922]: I0218 11:40:20.695127 4922 generic.go:334] "Generic (PLEG): container finished" podID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerID="99eb33433ccb75dafa75d7dcda595a86756a6a3d18a2403102396dc29b8e05f3" exitCode=0 Feb 18 11:40:20 crc kubenswrapper[4922]: I0218 11:40:20.695234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerDied","Data":"99eb33433ccb75dafa75d7dcda595a86756a6a3d18a2403102396dc29b8e05f3"} Feb 18 11:40:20 crc kubenswrapper[4922]: I0218 11:40:20.931030 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.006204 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities\") pod \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.006243 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content\") pod \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.006271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvvjz\" (UniqueName: \"kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz\") pod \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\" (UID: \"fc9b41f8-ac9b-4166-a2a6-80326e19254a\") " Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.007258 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities" (OuterVolumeSpecName: "utilities") pod "fc9b41f8-ac9b-4166-a2a6-80326e19254a" (UID: "fc9b41f8-ac9b-4166-a2a6-80326e19254a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.015739 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz" (OuterVolumeSpecName: "kube-api-access-fvvjz") pod "fc9b41f8-ac9b-4166-a2a6-80326e19254a" (UID: "fc9b41f8-ac9b-4166-a2a6-80326e19254a"). InnerVolumeSpecName "kube-api-access-fvvjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.060080 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc9b41f8-ac9b-4166-a2a6-80326e19254a" (UID: "fc9b41f8-ac9b-4166-a2a6-80326e19254a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.107779 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvvjz\" (UniqueName: \"kubernetes.io/projected/fc9b41f8-ac9b-4166-a2a6-80326e19254a-kube-api-access-fvvjz\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.107825 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.107839 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc9b41f8-ac9b-4166-a2a6-80326e19254a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.602122 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.705290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lcnjk" event={"ID":"fc9b41f8-ac9b-4166-a2a6-80326e19254a","Type":"ContainerDied","Data":"a8cd6fc2cc1529740a5644abc0364ded3c6ac51bb6647c92d79339c69415a299"} Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.705339 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lcnjk" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.705428 4922 scope.go:117] "RemoveContainer" containerID="99eb33433ccb75dafa75d7dcda595a86756a6a3d18a2403102396dc29b8e05f3" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.705446 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nf4nk" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="registry-server" containerID="cri-o://f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b" gracePeriod=2 Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.728823 4922 scope.go:117] "RemoveContainer" containerID="e63e35cde0feed6b8593caa8621cdbf494c204e6b29cfbb54918e219b2da13b4" Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.737257 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.740469 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lcnjk"] Feb 18 11:40:21 crc kubenswrapper[4922]: I0218 11:40:21.764796 4922 scope.go:117] "RemoveContainer" containerID="120adbdad8789c27eefe3c782cdf2eec2b4857607b20057ec0fcf6bbe6831fd0" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.228864 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.422825 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzdsv\" (UniqueName: \"kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv\") pod \"d0a05956-6087-461d-a271-52db98c6032a\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.422988 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities\") pod \"d0a05956-6087-461d-a271-52db98c6032a\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.423057 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content\") pod \"d0a05956-6087-461d-a271-52db98c6032a\" (UID: \"d0a05956-6087-461d-a271-52db98c6032a\") " Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.425292 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities" (OuterVolumeSpecName: "utilities") pod "d0a05956-6087-461d-a271-52db98c6032a" (UID: "d0a05956-6087-461d-a271-52db98c6032a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.433638 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv" (OuterVolumeSpecName: "kube-api-access-tzdsv") pod "d0a05956-6087-461d-a271-52db98c6032a" (UID: "d0a05956-6087-461d-a271-52db98c6032a"). InnerVolumeSpecName "kube-api-access-tzdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.461929 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0a05956-6087-461d-a271-52db98c6032a" (UID: "d0a05956-6087-461d-a271-52db98c6032a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.524351 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzdsv\" (UniqueName: \"kubernetes.io/projected/d0a05956-6087-461d-a271-52db98c6032a-kube-api-access-tzdsv\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.524399 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.524411 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a05956-6087-461d-a271-52db98c6032a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.719423 4922 generic.go:334] "Generic (PLEG): container finished" podID="d0a05956-6087-461d-a271-52db98c6032a" containerID="f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b" exitCode=0 Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.719508 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerDied","Data":"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b"} Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.719649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nf4nk" event={"ID":"d0a05956-6087-461d-a271-52db98c6032a","Type":"ContainerDied","Data":"a1f5dac4997bf52c337b4ff07c819ce5fc50051e9cba18a9a9a797070bbd762b"} Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.719683 4922 scope.go:117] "RemoveContainer" containerID="f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.719862 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nf4nk" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.740725 4922 scope.go:117] "RemoveContainer" containerID="647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.774242 4922 scope.go:117] "RemoveContainer" containerID="e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.777843 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.784678 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nf4nk"] Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.802528 4922 scope.go:117] "RemoveContainer" containerID="f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b" Feb 18 11:40:22 crc kubenswrapper[4922]: E0218 11:40:22.803225 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b\": container with ID starting with f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b not found: ID does not exist" containerID="f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.803294 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b"} err="failed to get container status \"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b\": rpc error: code = NotFound desc = could not find container \"f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b\": container with ID starting with f8ca4a530f7e5d39796311f5abf7b45f579aa1e9d87cb174faf9dad00bb56f6b not found: ID does not exist" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.803343 4922 scope.go:117] "RemoveContainer" containerID="647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8" Feb 18 11:40:22 crc kubenswrapper[4922]: E0218 11:40:22.803905 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8\": container with ID starting with 647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8 not found: ID does not exist" containerID="647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.803959 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8"} err="failed to get container status \"647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8\": rpc error: code = NotFound desc = could not find container \"647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8\": container with ID starting with 647b9976eecd46fff2a94f43bff0b109d69e22d4342128856c40ac682b1d18e8 not found: ID does not exist" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.803996 4922 scope.go:117] "RemoveContainer" containerID="e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0" Feb 18 11:40:22 crc kubenswrapper[4922]: E0218 11:40:22.804577 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0\": container with ID starting with e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0 not found: ID does not exist" containerID="e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.804620 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0"} err="failed to get container status \"e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0\": rpc error: code = NotFound desc = could not find container \"e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0\": container with ID starting with e5b592a27d7235e3492be66b98f4503fc9cce34317fcbba82078ccf86ac362b0 not found: ID does not exist" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.980204 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a05956-6087-461d-a271-52db98c6032a" path="/var/lib/kubelet/pods/d0a05956-6087-461d-a271-52db98c6032a/volumes" Feb 18 11:40:22 crc kubenswrapper[4922]: I0218 11:40:22.981014 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" path="/var/lib/kubelet/pods/fc9b41f8-ac9b-4166-a2a6-80326e19254a/volumes" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.203221 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.204628 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" podUID="72ad78b5-bc12-488f-aab9-869895d67ce8" containerName="controller-manager" containerID="cri-o://c09c49cb94f646993c2a4b0fda6fbdd7f540489b4fa899ee2bacffa6390fbbe3" gracePeriod=30 Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.302827 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.303429 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" podUID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" containerName="route-controller-manager" containerID="cri-o://bebc46c7f9271b765da6413fad509e6ec5685b8951728d70348b293df49ce847" gracePeriod=30 Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.745453 4922 generic.go:334] "Generic (PLEG): container finished" podID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" containerID="bebc46c7f9271b765da6413fad509e6ec5685b8951728d70348b293df49ce847" exitCode=0 Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.745540 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" event={"ID":"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4","Type":"ContainerDied","Data":"bebc46c7f9271b765da6413fad509e6ec5685b8951728d70348b293df49ce847"} Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.745593 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" 
event={"ID":"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4","Type":"ContainerDied","Data":"45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e"} Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.745617 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45dfbf68b12565a4b195c299a902d800748f462bcdfa1500e5b1e3f3b967157e" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.746775 4922 generic.go:334] "Generic (PLEG): container finished" podID="72ad78b5-bc12-488f-aab9-869895d67ce8" containerID="c09c49cb94f646993c2a4b0fda6fbdd7f540489b4fa899ee2bacffa6390fbbe3" exitCode=0 Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.746814 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" event={"ID":"72ad78b5-bc12-488f-aab9-869895d67ce8","Type":"ContainerDied","Data":"c09c49cb94f646993c2a4b0fda6fbdd7f540489b4fa899ee2bacffa6390fbbe3"} Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.779322 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.814733 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca\") pod \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.814787 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert\") pod \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.814857 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdp7v\" (UniqueName: \"kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v\") pod \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.814892 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config\") pod \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\" (UID: \"33610a6b-c93e-4578-b8d9-93f5a0dbe1a4\") " Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.815761 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config" (OuterVolumeSpecName: "config") pod "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" (UID: "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.816109 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" (UID: "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.820807 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v" (OuterVolumeSpecName: "kube-api-access-bdp7v") pod "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" (UID: "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4"). InnerVolumeSpecName "kube-api-access-bdp7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.825702 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" (UID: "33610a6b-c93e-4578-b8d9-93f5a0dbe1a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.916192 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdp7v\" (UniqueName: \"kubernetes.io/projected/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-kube-api-access-bdp7v\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.916701 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.916784 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:27 crc kubenswrapper[4922]: I0218 11:40:27.916846 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.276647 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.320703 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config\") pod \"72ad78b5-bc12-488f-aab9-869895d67ce8\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.320747 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles\") pod \"72ad78b5-bc12-488f-aab9-869895d67ce8\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.320765 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca\") pod \"72ad78b5-bc12-488f-aab9-869895d67ce8\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.320786 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjcrd\" (UniqueName: \"kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd\") pod \"72ad78b5-bc12-488f-aab9-869895d67ce8\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.321772 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca" (OuterVolumeSpecName: "client-ca") pod "72ad78b5-bc12-488f-aab9-869895d67ce8" (UID: "72ad78b5-bc12-488f-aab9-869895d67ce8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.321823 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "72ad78b5-bc12-488f-aab9-869895d67ce8" (UID: "72ad78b5-bc12-488f-aab9-869895d67ce8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.321845 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config" (OuterVolumeSpecName: "config") pod "72ad78b5-bc12-488f-aab9-869895d67ce8" (UID: "72ad78b5-bc12-488f-aab9-869895d67ce8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.322332 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert\") pod \"72ad78b5-bc12-488f-aab9-869895d67ce8\" (UID: \"72ad78b5-bc12-488f-aab9-869895d67ce8\") " Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.322917 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.322944 4922 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.322959 4922 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72ad78b5-bc12-488f-aab9-869895d67ce8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.325663 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd" (OuterVolumeSpecName: "kube-api-access-kjcrd") pod "72ad78b5-bc12-488f-aab9-869895d67ce8" (UID: "72ad78b5-bc12-488f-aab9-869895d67ce8"). InnerVolumeSpecName "kube-api-access-kjcrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.325775 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72ad78b5-bc12-488f-aab9-869895d67ce8" (UID: "72ad78b5-bc12-488f-aab9-869895d67ce8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.424492 4922 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72ad78b5-bc12-488f-aab9-869895d67ce8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.424537 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjcrd\" (UniqueName: \"kubernetes.io/projected/72ad78b5-bc12-488f-aab9-869895d67ce8-kube-api-access-kjcrd\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.752623 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.752661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" event={"ID":"72ad78b5-bc12-488f-aab9-869895d67ce8","Type":"ContainerDied","Data":"9ec1524a75d8bd2f1625be98e138e6cf305c4ac3d285301d8e45c5b5ccd89829"} Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.752630 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc84f956-ltmd7" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.752766 4922 scope.go:117] "RemoveContainer" containerID="c09c49cb94f646993c2a4b0fda6fbdd7f540489b4fa899ee2bacffa6390fbbe3" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.797577 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.802925 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc84f956-ltmd7"] Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.811091 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.814658 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c5f5b6bc4-4xglc"] Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.980151 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" path="/var/lib/kubelet/pods/33610a6b-c93e-4578-b8d9-93f5a0dbe1a4/volumes" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.980805 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ad78b5-bc12-488f-aab9-869895d67ce8" path="/var/lib/kubelet/pods/72ad78b5-bc12-488f-aab9-869895d67ce8/volumes" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981245 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w"] Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981467 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981488 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981504 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981512 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981522 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981529 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981541 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" containerName="route-controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981549 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" containerName="route-controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981559 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" 
containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981566 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981587 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981596 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981605 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ad78b5-bc12-488f-aab9-869895d67ce8" containerName="controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981613 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ad78b5-bc12-488f-aab9-869895d67ce8" containerName="controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981625 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981631 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981640 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981648 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981660 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981668 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="extract-utilities" Feb 18 11:40:28 crc kubenswrapper[4922]: E0218 11:40:28.981679 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981687 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="extract-content" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981816 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ad78b5-bc12-488f-aab9-869895d67ce8" containerName="controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981830 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a05956-6087-461d-a271-52db98c6032a" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981844 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="33610a6b-c93e-4578-b8d9-93f5a0dbe1a4" containerName="route-controller-manager" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981855 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="80efc1b6-0ae2-4cbf-8dc9-0e2c4d526f54" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.981868 4922 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fc9b41f8-ac9b-4166-a2a6-80326e19254a" containerName="registry-server" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.982714 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.983583 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65bf898576-dcl7m"] Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.984408 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.990936 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.990977 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.991198 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.996912 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997209 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997478 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997506 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997667 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997797 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.997920 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.998805 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:40:28 crc kubenswrapper[4922]: I0218 11:40:28.998982 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.001104 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w"] Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.004714 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.009499 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65bf898576-dcl7m"] Feb 18 
11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wg8v\" (UniqueName: \"kubernetes.io/projected/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-kube-api-access-5wg8v\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031076 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-serving-cert\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031136 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-config\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031170 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d32a70-8611-452d-8f4a-04e84753d49d-serving-cert\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031190 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-client-ca\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031206 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-client-ca\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031220 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvld\" (UniqueName: \"kubernetes.io/projected/84d32a70-8611-452d-8f4a-04e84753d49d-kube-api-access-frvld\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031237 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-config\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 
11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.031258 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-proxy-ca-bundles\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.131916 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d32a70-8611-452d-8f4a-04e84753d49d-serving-cert\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132202 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-client-ca\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132292 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-client-ca\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132406 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frvld\" (UniqueName: \"kubernetes.io/projected/84d32a70-8611-452d-8f4a-04e84753d49d-kube-api-access-frvld\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-config\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132585 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-proxy-ca-bundles\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132675 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wg8v\" (UniqueName: \"kubernetes.io/projected/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-kube-api-access-5wg8v\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132763 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-serving-cert\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.132858 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-config\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.133542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-client-ca\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.133778 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-client-ca\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.134092 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-proxy-ca-bundles\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.134531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-config\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.134638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d32a70-8611-452d-8f4a-04e84753d49d-config\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.136355 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d32a70-8611-452d-8f4a-04e84753d49d-serving-cert\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.136379 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-serving-cert\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " 
pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.155244 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvld\" (UniqueName: \"kubernetes.io/projected/84d32a70-8611-452d-8f4a-04e84753d49d-kube-api-access-frvld\") pod \"route-controller-manager-cc78685b5-mmg7w\" (UID: \"84d32a70-8611-452d-8f4a-04e84753d49d\") " pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.159935 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wg8v\" (UniqueName: \"kubernetes.io/projected/0e5ba5b5-0f09-46fe-bcd1-52d3283562e3-kube-api-access-5wg8v\") pod \"controller-manager-65bf898576-dcl7m\" (UID: \"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3\") " pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.296949 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.304207 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.802900 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w"] Feb 18 11:40:29 crc kubenswrapper[4922]: W0218 11:40:29.805647 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d32a70_8611_452d_8f4a_04e84753d49d.slice/crio-2b95fa6d35dcdb869da77156680a0c535a0d0e1a92ce39842346105f70fc3ff8 WatchSource:0}: Error finding container 2b95fa6d35dcdb869da77156680a0c535a0d0e1a92ce39842346105f70fc3ff8: Status 404 returned error can't find the container with id 2b95fa6d35dcdb869da77156680a0c535a0d0e1a92ce39842346105f70fc3ff8 Feb 18 11:40:29 crc kubenswrapper[4922]: I0218 11:40:29.862229 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65bf898576-dcl7m"] Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.768269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" event={"ID":"84d32a70-8611-452d-8f4a-04e84753d49d","Type":"ContainerStarted","Data":"e2a1d3e949a6e433768f3d71473b4d25c266f367076510b6399c9ab451840655"} Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.768339 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" event={"ID":"84d32a70-8611-452d-8f4a-04e84753d49d","Type":"ContainerStarted","Data":"2b95fa6d35dcdb869da77156680a0c535a0d0e1a92ce39842346105f70fc3ff8"} Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.768758 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.769553 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" 
event={"ID":"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3","Type":"ContainerStarted","Data":"58935e6a8e4f4b43e0cc61ef368e0aeafc91a6f1c962ce9208aa73de4023391b"} Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.769592 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" event={"ID":"0e5ba5b5-0f09-46fe-bcd1-52d3283562e3","Type":"ContainerStarted","Data":"32127c44cac5dc4ac1970ac1b7396179a79cdd3a32cb30e2c9d6219e276ce192"} Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.769858 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.774138 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.774309 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.790625 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cc78685b5-mmg7w" podStartSLOduration=3.790609223 podStartE2EDuration="3.790609223s" podCreationTimestamp="2026-02-18 11:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:40:30.787910753 +0000 UTC m=+232.515614843" watchObservedRunningTime="2026-02-18 11:40:30.790609223 +0000 UTC m=+232.518313303" Feb 18 11:40:30 crc kubenswrapper[4922]: I0218 11:40:30.807573 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65bf898576-dcl7m" podStartSLOduration=3.807553555 podStartE2EDuration="3.807553555s" podCreationTimestamp="2026-02-18 11:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:40:30.806304578 +0000 UTC m=+232.534008658" watchObservedRunningTime="2026-02-18 11:40:30.807553555 +0000 UTC m=+232.535257635" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.880602 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.881621 4922 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.882557 4922 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.882705 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.883671 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c" gracePeriod=15 Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.883737 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6" gracePeriod=15 Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.883690 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a" gracePeriod=15 Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.883704 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8" gracePeriod=15 Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.884074 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe" gracePeriod=15 Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.887751 4922 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888323 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888357 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888406 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888425 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888446 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888650 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888694 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888707 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888747 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888760 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888784 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888796 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.888815 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.888827 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889181 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889214 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889243 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889260 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889284 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889298 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889317 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:40:32 crc kubenswrapper[4922]: E0218 11:40:32.889709 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.889730 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:40:32 crc kubenswrapper[4922]: I0218 11:40:32.933704 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.002435 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.002482 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.002534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.002569 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.002594 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.003346 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.003404 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.003441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.104963 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105020 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105044 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105071 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105098 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105125 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105234 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105231 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105099 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105411 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105437 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105532 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105580 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105614 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105645 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.105725 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.228706 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:40:33 crc kubenswrapper[4922]: E0218 11:40:33.256926 4922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955467ce10eb1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:40:33.25573814 +0000 UTC m=+234.983442240,LastTimestamp:2026-02-18 11:40:33.25573814 +0000 UTC m=+234.983442240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.791248 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125"} Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.791298 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b7186cbec24efd9deedddf85df28365fee1afa7fa17382bc569bd5b4abc33045"} Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.792387 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.795600 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.798409 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.799514 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a" exitCode=0 Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.799594 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c" exitCode=0 Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.799599 4922 scope.go:117] "RemoveContainer" containerID="4e43b874193e0cccc7b1ff6e7d41a35d7777ecc53ad7d7178923e009ca76dbcd" Feb 18 
11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.799606 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8" exitCode=0 Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.799704 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6" exitCode=2 Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.802300 4922 generic.go:334] "Generic (PLEG): container finished" podID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" containerID="0f5aa83e132da84ebb77b9c8d8371acd88a3509bc96c992baa1b131822fe3971" exitCode=0 Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.802352 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ad37f26c-293d-42c8-a88e-21e0a2c5e05d","Type":"ContainerDied","Data":"0f5aa83e132da84ebb77b9c8d8371acd88a3509bc96c992baa1b131822fe3971"} Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.803124 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:33 crc kubenswrapper[4922]: I0218 11:40:33.803619 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:34 crc kubenswrapper[4922]: I0218 11:40:34.814693 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.285395 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.286394 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.286720 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.292479 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.293413 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.293867 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.294287 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.294793 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337671 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access\") pod \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337743 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337819 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337877 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337926 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337973 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir\") pod \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.337970 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338045 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338135 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ad37f26c-293d-42c8-a88e-21e0a2c5e05d" (UID: "ad37f26c-293d-42c8-a88e-21e0a2c5e05d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338201 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock\") pod \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\" (UID: \"ad37f26c-293d-42c8-a88e-21e0a2c5e05d\") " Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338309 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock" (OuterVolumeSpecName: "var-lock") pod "ad37f26c-293d-42c8-a88e-21e0a2c5e05d" (UID: "ad37f26c-293d-42c8-a88e-21e0a2c5e05d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338550 4922 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338575 4922 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338593 4922 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338611 4922 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.338628 4922 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.347642 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ad37f26c-293d-42c8-a88e-21e0a2c5e05d" (UID: "ad37f26c-293d-42c8-a88e-21e0a2c5e05d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.440081 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad37f26c-293d-42c8-a88e-21e0a2c5e05d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.824109 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ad37f26c-293d-42c8-a88e-21e0a2c5e05d","Type":"ContainerDied","Data":"2993e8b81cf0d5924b6da2a78a590d591a1692ddbff65fb9a4fa27002842c2e5"} Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.824188 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2993e8b81cf0d5924b6da2a78a590d591a1692ddbff65fb9a4fa27002842c2e5" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.824195 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.833661 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.837001 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.837106 4922 scope.go:117] "RemoveContainer" containerID="9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.836845 4922 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe" exitCode=0 Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.859146 4922 scope.go:117] "RemoveContainer" containerID="90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.860077 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.860493 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.861175 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.867274 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.867927 4922 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.868255 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.875821 4922 scope.go:117] "RemoveContainer" containerID="bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.890517 4922 scope.go:117] "RemoveContainer" containerID="434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.904938 4922 scope.go:117] "RemoveContainer" 
containerID="dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.923319 4922 scope.go:117] "RemoveContainer" containerID="aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.945007 4922 scope.go:117] "RemoveContainer" containerID="9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.946026 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\": container with ID starting with 9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a not found: ID does not exist" containerID="9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.946090 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a"} err="failed to get container status \"9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\": rpc error: code = NotFound desc = could not find container \"9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a\": container with ID starting with 9af62821d90e57595091aa9e7f1083acd21cb955ada0837c5ab4f71885ff7f1a not found: ID does not exist" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.946122 4922 scope.go:117] "RemoveContainer" containerID="90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.946554 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\": container with ID starting with 90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c not found: ID does not exist" containerID="90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.946691 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c"} err="failed to get container status \"90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\": rpc error: code = NotFound desc = could not find container \"90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c\": container with ID starting with 90b891a2fe938b437cbfb28c7715d0ad758541d844a4768434d3c1f1a3f0578c not found: ID does not exist" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.946785 4922 scope.go:117] "RemoveContainer" containerID="bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.947418 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\": container with ID starting with bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8 not found: ID does not exist" containerID="bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.947474 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8"} err="failed to get container status \"bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\": rpc error: code = NotFound desc = could not find container \"bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8\": container with ID starting with bab832676116da0ae3d9dc70d182e283d7ae2418db02c1ea0062812cab0d71b8 not found: ID does not exist" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.947495 4922 scope.go:117] "RemoveContainer" containerID="434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.948888 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\": container with ID starting with 434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6 not found: ID does not exist" containerID="434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.949043 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6"} err="failed to get container status \"434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\": rpc error: code = NotFound desc = could not find container \"434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6\": container with ID starting with 434010c9772cf1c9db5578ecd4726ae2de0c0a3923d45b522868677cf78fcaf6 not found: ID does not exist" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.949163 4922 scope.go:117] "RemoveContainer" containerID="dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.949866 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\": container with ID starting with dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe not found: ID does not exist" containerID="dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.949894 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe"} err="failed to get container status \"dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\": rpc error: code = NotFound desc = could not find container \"dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe\": container with ID starting with dacf3659cbb51e3d8df303470837041f3110f38b42187d40cbb4e53a8750cafe not found: ID does not exist" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.949910 4922 scope.go:117] "RemoveContainer" containerID="aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e" Feb 18 11:40:35 crc kubenswrapper[4922]: E0218 11:40:35.950254 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\": container with ID starting with aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e not found: ID does not exist" 
containerID="aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e" Feb 18 11:40:35 crc kubenswrapper[4922]: I0218 11:40:35.950385 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e"} err="failed to get container status \"aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\": rpc error: code = NotFound desc = could not find container \"aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e\": container with ID starting with aef2d87a1f410aab2dfaf3663b6d48d4772bcebaafc1d7b87e49613f8216826e not found: ID does not exist" Feb 18 11:40:36 crc kubenswrapper[4922]: I0218 11:40:36.984641 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.432350 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.433237 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.433752 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.434409 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.434729 4922 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: I0218 11:40:38.434776 4922 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.435060 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Feb 18 11:40:38 crc kubenswrapper[4922]: E0218 11:40:38.635824 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 18 11:40:38 crc kubenswrapper[4922]: I0218 11:40:38.976395 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:38 crc kubenswrapper[4922]: I0218 11:40:38.976897 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:39 crc kubenswrapper[4922]: E0218 11:40:39.037755 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 18 11:40:39 crc kubenswrapper[4922]: E0218 11:40:39.839773 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 18 11:40:41 crc kubenswrapper[4922]: E0218 11:40:41.402766 4922 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955467ce10eb1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:40:33.25573814 +0000 UTC m=+234.983442240,LastTimestamp:2026-02-18 11:40:33.25573814 +0000 UTC m=+234.983442240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:40:41 crc kubenswrapper[4922]: E0218 11:40:41.440636 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 18 11:40:44 crc kubenswrapper[4922]: E0218 11:40:44.642329 4922 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="6.4s" Feb 18 11:40:44 crc kubenswrapper[4922]: I0218 11:40:44.750810 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" containerID="cri-o://6267ab86fea6d91bef9e1d2fb055261c2673103810304b496229122d5cfe0a9c" gracePeriod=15 Feb 18 11:40:44 crc kubenswrapper[4922]: I0218 11:40:44.904034 
4922 generic.go:334] "Generic (PLEG): container finished" podID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerID="6267ab86fea6d91bef9e1d2fb055261c2673103810304b496229122d5cfe0a9c" exitCode=0 Feb 18 11:40:44 crc kubenswrapper[4922]: I0218 11:40:44.904147 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" event={"ID":"9e36551d-13cd-4a75-a29b-658850b46cb8","Type":"ContainerDied","Data":"6267ab86fea6d91bef9e1d2fb055261c2673103810304b496229122d5cfe0a9c"} Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.364979 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.365689 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.366159 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.366573 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.491716 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.491891 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpknq\" (UniqueName: \"kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.491952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.491988 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492099 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492181 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492170 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492240 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492304 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492389 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492443 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492492 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492528 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle\") pod \"9e36551d-13cd-4a75-a29b-658850b46cb8\" (UID: \"9e36551d-13cd-4a75-a29b-658850b46cb8\") " Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.492879 4922 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.494114 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.494138 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.494295 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.495926 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.498672 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.499355 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.499974 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.500322 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.506144 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.506383 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.506599 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.507666 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq" (OuterVolumeSpecName: "kube-api-access-mpknq") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "kube-api-access-mpknq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.510586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9e36551d-13cd-4a75-a29b-658850b46cb8" (UID: "9e36551d-13cd-4a75-a29b-658850b46cb8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594560 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594619 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594641 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594661 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594684 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594705 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594725 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594747 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpknq\" (UniqueName: \"kubernetes.io/projected/9e36551d-13cd-4a75-a29b-658850b46cb8-kube-api-access-mpknq\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594765 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594782 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-ocp-branding-template\") on node \"crc\" 
DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594803 4922 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594820 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.594841 4922 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e36551d-13cd-4a75-a29b-658850b46cb8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.911580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" event={"ID":"9e36551d-13cd-4a75-a29b-658850b46cb8","Type":"ContainerDied","Data":"f32dc9e435ded17fa5224c75138342bd559fb562ee11ecfd7567b73083fd4cff"} Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.911635 4922 scope.go:117] "RemoveContainer" containerID="6267ab86fea6d91bef9e1d2fb055261c2673103810304b496229122d5cfe0a9c" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.911715 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.912839 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.913208 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.913469 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.936235 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.936819 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:45 crc kubenswrapper[4922]: I0218 11:40:45.937199 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.922664 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.922915 4922 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3" exitCode=1 Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.922962 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3"} Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.923752 4922 scope.go:117] "RemoveContainer" containerID="8dda1daddcf256e52a7dc75c6b5c221fcd8b8fad50c50b6053774f9a8fab57e3" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.924789 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.925219 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.925666 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:46 crc kubenswrapper[4922]: I0218 11:40:46.926191 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.934015 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 
11:40:47.934088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"66a332edd3c4a9a02ce364e963c0758fd557d7e92bd10259e23ad02259ab09e2"} Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.935324 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.935949 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.936529 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.937061 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.973017 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.974068 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.974679 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.975039 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.975412 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.994819 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.994854 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:47 crc kubenswrapper[4922]: E0218 11:40:47.995408 4922 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:47 crc kubenswrapper[4922]: I0218 11:40:47.996095 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.946856 4922 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="67e441d2187cdcc2ea099c84fa65e2f200b56661adf368584566eb8d7312fee4" exitCode=0 Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.946907 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"67e441d2187cdcc2ea099c84fa65e2f200b56661adf368584566eb8d7312fee4"} Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.946933 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c8d25a24af43d093ea8adebb00d2a85e2a250ba2dc1fc9fff1030a5a5a733bd"} Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.947187 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.947201 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:48 crc kubenswrapper[4922]: E0218 11:40:48.947779 4922 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.948075 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.948655 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.949315 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.949936 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.984504 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.984982 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.985312 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.985665 4922 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:48 crc kubenswrapper[4922]: I0218 11:40:48.985886 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.014141 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.018681 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.019233 4922 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.019740 4922 status_manager.go:851] "Failed to get status for pod" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.020393 4922 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 
11:40:49.020845 4922 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.021245 4922 status_manager.go:851] "Failed to get status for pod" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" pod="openshift-authentication/oauth-openshift-558db77b4-q7mwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-q7mwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.956612 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"55bd723f32496e423143c9f5042299fcbf7e094a0e7a52680bc120b042d2616b"} Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.956940 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.956952 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbdc5ff7e662f23c66224b081dc8fc9186de34341b715a86d99a10cbdccfdc3f"} Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.956964 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca941c710b6879755b97d9936d9c1cf4defb71efa4628851713585ac256bf739"} Feb 18 11:40:49 crc kubenswrapper[4922]: I0218 11:40:49.956982 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2e770938f62620937d9c3a7f4f84873a0e913f10ed1948e9f814cafccc25c5a0"} Feb 18 11:40:50 crc kubenswrapper[4922]: I0218 11:40:50.970352 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:50 crc kubenswrapper[4922]: I0218 11:40:50.970905 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:50 crc kubenswrapper[4922]: I0218 11:40:50.970976 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dba04fd7b3341e318f1e884b3228eb4837fbd5b0209793720cba0ab6a1c029cd"} Feb 18 11:40:50 crc kubenswrapper[4922]: I0218 11:40:50.971105 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:52 crc kubenswrapper[4922]: I0218 11:40:52.996670 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:52 crc kubenswrapper[4922]: I0218 11:40:52.997088 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:53 crc kubenswrapper[4922]: I0218 
11:40:53.004692 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:55 crc kubenswrapper[4922]: I0218 11:40:55.981342 4922 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:56 crc kubenswrapper[4922]: I0218 11:40:56.009956 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:56 crc kubenswrapper[4922]: I0218 11:40:56.009991 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:56 crc kubenswrapper[4922]: I0218 11:40:56.017357 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:40:56 crc kubenswrapper[4922]: I0218 11:40:56.063139 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1156a0a-0a03-46cd-88d7-4b38085976bd" Feb 18 11:40:57 crc kubenswrapper[4922]: I0218 11:40:57.017568 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:57 crc kubenswrapper[4922]: I0218 11:40:57.017881 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:40:57 crc kubenswrapper[4922]: I0218 11:40:57.021916 4922 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f1156a0a-0a03-46cd-88d7-4b38085976bd" Feb 18 11:41:02 crc kubenswrapper[4922]: I0218 11:41:02.361911 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:41:05 crc kubenswrapper[4922]: I0218 11:41:05.246868 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 11:41:05 crc kubenswrapper[4922]: I0218 11:41:05.731916 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 11:41:06 crc kubenswrapper[4922]: I0218 11:41:06.455083 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 11:41:06 crc kubenswrapper[4922]: I0218 11:41:06.512857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 11:41:06 crc kubenswrapper[4922]: I0218 11:41:06.914005 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.084205 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.639907 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.811447 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.816996 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.915942 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.993223 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 11:41:07 crc kubenswrapper[4922]: I0218 11:41:07.997652 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.506395 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.558242 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.566193 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.584939 4922 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.677205 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.743099 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.810601 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.838483 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.896680 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 11:41:08 crc kubenswrapper[4922]: I0218 11:41:08.958422 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.038910 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.134969 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.149858 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.278782 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.293909 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.347158 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.379323 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.425521 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.435041 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.633276 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.798007 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.815934 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.836749 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.839658 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.854002 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.906495 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 11:41:09 crc kubenswrapper[4922]: I0218 11:41:09.991300 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.066507 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.117163 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.151677 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.176397 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.177018 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.362505 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.388851 4922 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.401540 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.440319 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.444844 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.558546 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.574621 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.637863 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.689546 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.702655 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.723106 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.821926 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.832847 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.862450 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.880760 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 11:41:10 crc kubenswrapper[4922]: I0218 11:41:10.887638 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.153148 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.252747 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.292396 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.328204 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.348438 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.363976 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.364095 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.365838 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.389993 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.420874 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.451483 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.492830 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.510310 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.659900 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.725761 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.806438 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.823225 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.823467 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:41:11 crc kubenswrapper[4922]: I0218 11:41:11.921257 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.090128 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.108645 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.241772 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.308589 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.317168 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 11:41:12 crc 
kubenswrapper[4922]: I0218 11:41:12.351675 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.454033 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.485518 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.486942 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.491717 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.598995 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.636117 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.673773 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.832295 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.918536 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.935878 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 11:41:12 crc kubenswrapper[4922]: I0218 11:41:12.992622 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.008931 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.053803 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.071738 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.119711 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.169793 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.182060 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.234010 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.291337 
4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.303691 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.303722 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.334031 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.374592 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.405186 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.409123 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.465931 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.567174 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.614258 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.629198 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.661465 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.668845 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.699761 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.830444 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.842080 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 11:41:13 crc kubenswrapper[4922]: I0218 11:41:13.862609 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.044700 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.187309 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 
18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.245568 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.257326 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.273822 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.359595 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.511096 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.536607 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.587220 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.658062 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.686174 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.726552 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.756835 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.772251 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.775857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.788840 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.912562 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.988102 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 11:41:14 crc kubenswrapper[4922]: I0218 11:41:14.991028 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.056744 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.068969 4922 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.084655 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.117403 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.153387 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.228776 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.237388 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.238286 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.482209 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.483090 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.521983 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.588305 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.608229 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.688754 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.735005 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.789235 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.914508 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 11:41:15 crc kubenswrapper[4922]: I0218 11:41:15.920938 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.013914 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.280447 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.382742 4922 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.382784 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.446221 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.487582 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.526750 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.570309 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.604657 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.650643 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.672864 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.700970 4922 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.878526 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 11:41:16 crc kubenswrapper[4922]: I0218 11:41:16.945435 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.023219 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.027932 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.110418 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.127661 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.137421 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.185748 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.377213 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.407153 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.521631 4922 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.522402 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.523349 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.523332933 podStartE2EDuration="45.523332933s" podCreationTimestamp="2026-02-18 11:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:40:56.059827889 +0000 UTC m=+257.787531959" watchObservedRunningTime="2026-02-18 11:41:17.523332933 +0000 UTC m=+279.251037023" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.524271 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.526950 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-q7mwg","openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527004 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-556766588f-78hf2"] Feb 18 11:41:17 crc kubenswrapper[4922]: E0218 11:41:17.527200 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527218 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" Feb 18 11:41:17 crc kubenswrapper[4922]: E0218 11:41:17.527239 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" containerName="installer" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527248 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" containerName="installer" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527391 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" containerName="oauth-openshift" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527404 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad37f26c-293d-42c8-a88e-21e0a2c5e05d" containerName="installer" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527625 4922 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527658 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="384d508e-8f6a-427d-a85d-85bb61d8405e" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.527845 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.531029 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.531285 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.531322 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.531461 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.531681 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.532559 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.532620 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.533715 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.535043 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.535247 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.535445 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.535583 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.538045 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.547678 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.551983 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.558686 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.572511 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.580182 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.587350 4922 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.587335436 podStartE2EDuration="22.587335436s" podCreationTimestamp="2026-02-18 11:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:41:17.586992307 +0000 UTC m=+279.314696407" watchObservedRunningTime="2026-02-18 11:41:17.587335436 +0000 UTC m=+279.315039516" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.592315 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.594059 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.612466 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.658333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.658701 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.658834 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.658941 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659046 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659162 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-audit-policies\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659280 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659445 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qms6p\" (UniqueName: \"kubernetes.io/projected/398e726b-2b70-4438-ac1b-bda8ca321928-kube-api-access-qms6p\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659684 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/398e726b-2b70-4438-ac1b-bda8ca321928-audit-dir\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.659925 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.660024 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.660381 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-cliconfig\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.761863 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-cliconfig\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762140 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762234 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762332 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762430 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762512 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762591 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-audit-policies\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 
11:41:17.762674 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762764 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qms6p\" (UniqueName: \"kubernetes.io/projected/398e726b-2b70-4438-ac1b-bda8ca321928-kube-api-access-qms6p\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.762924 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.763001 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/398e726b-2b70-4438-ac1b-bda8ca321928-audit-dir\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.763094 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.763177 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.763613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/398e726b-2b70-4438-ac1b-bda8ca321928-audit-dir\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.763001 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-cliconfig\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.764586 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-audit-policies\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.764599 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-service-ca\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.764868 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.769079 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-serving-cert\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.769603 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-session\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.770689 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.771885 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-error\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.772023 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.773347 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-user-template-login\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.775782 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-router-certs\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.785743 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/398e726b-2b70-4438-ac1b-bda8ca321928-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.793107 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qms6p\" (UniqueName: \"kubernetes.io/projected/398e726b-2b70-4438-ac1b-bda8ca321928-kube-api-access-qms6p\") pod \"oauth-openshift-556766588f-78hf2\" (UID: \"398e726b-2b70-4438-ac1b-bda8ca321928\") " pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.801072 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.801250 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 11:41:17 crc kubenswrapper[4922]: I0218 11:41:17.846440 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.003647 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.059911 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-556766588f-78hf2"] Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.159557 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.205971 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.221141 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.251893 4922 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.260619 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.264764 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.278446 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.297440 4922 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.297690 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125" gracePeriod=5 Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.305572 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.340337 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.431512 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.475712 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-556766588f-78hf2"] Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.601817 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.656612 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.715271 4922 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.747570 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 11:41:18 crc kubenswrapper[4922]: I0218 11:41:18.981581 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e36551d-13cd-4a75-a29b-658850b46cb8" path="/var/lib/kubelet/pods/9e36551d-13cd-4a75-a29b-658850b46cb8/volumes" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.000316 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.012964 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.115762 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.141614 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.150791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" event={"ID":"398e726b-2b70-4438-ac1b-bda8ca321928","Type":"ContainerStarted","Data":"2a35855c08d4543fbb694477f09db21d8232028117686782bfed4d4d27c22e2e"} Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.150851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" event={"ID":"398e726b-2b70-4438-ac1b-bda8ca321928","Type":"ContainerStarted","Data":"a07aa410aa6179b667b4c581f1683feb197787cff1c9d03d894b65033c360287"} Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.154862 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.158094 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.172076 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-556766588f-78hf2" podStartSLOduration=60.172058754 podStartE2EDuration="1m0.172058754s" podCreationTimestamp="2026-02-18 11:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:41:19.171483138 +0000 UTC m=+280.899187228" watchObservedRunningTime="2026-02-18 11:41:19.172058754 +0000 UTC m=+280.899762834" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.438401 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.449888 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.485320 4922 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.565936 
4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.857739 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.908546 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 11:41:19 crc kubenswrapper[4922]: I0218 11:41:19.990213 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.007557 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.065549 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.067339 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.168569 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.233219 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.238971 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.276888 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.392300 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.429855 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.475990 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.609776 4922 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.811349 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 11:41:20 crc kubenswrapper[4922]: I0218 11:41:20.976559 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 11:41:23 crc kubenswrapper[4922]: I0218 11:41:23.929571 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 11:41:23 crc kubenswrapper[4922]: I0218 11:41:23.930664 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039281 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039337 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039387 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039430 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039465 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039803 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.039878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.040099 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.040213 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.048550 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.141058 4922 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.141093 4922 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.141108 4922 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.141128 4922 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.141142 4922 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.192490 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.192565 4922 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125" exitCode=137 Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.192619 4922 scope.go:117] "RemoveContainer" containerID="856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.192679 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.222453 4922 scope.go:117] "RemoveContainer" containerID="856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125" Feb 18 11:41:24 crc kubenswrapper[4922]: E0218 11:41:24.223068 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125\": container with ID starting with 856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125 not found: ID does not exist" containerID="856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.223382 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125"} err="failed to get container status \"856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125\": rpc error: code = NotFound desc = could not find container \"856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125\": container with ID starting with 856502c67849034b25212f7f840344a5e8241e9f0ff22714b0ccd364807de125 not found: ID does not exist" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.983162 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 11:41:24 crc kubenswrapper[4922]: I0218 11:41:24.984016 4922 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 18 11:41:25 crc kubenswrapper[4922]: I0218 11:41:25.033628 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:41:25 crc kubenswrapper[4922]: I0218 11:41:25.033691 4922 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="032f3636-eb0f-4013-8788-c79899e94cc8" Feb 18 11:41:25 crc kubenswrapper[4922]: I0218 11:41:25.039212 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:41:25 crc kubenswrapper[4922]: I0218 11:41:25.039264 4922 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="032f3636-eb0f-4013-8788-c79899e94cc8" Feb 18 11:41:37 crc kubenswrapper[4922]: I0218 11:41:37.632943 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 11:41:37 crc kubenswrapper[4922]: I0218 11:41:37.782521 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 11:41:38 crc kubenswrapper[4922]: I0218 11:41:38.745026 4922 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 11:41:40 crc kubenswrapper[4922]: I0218 11:41:40.088157 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:41:46 crc kubenswrapper[4922]: I0218 11:41:46.741830 4922 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Feb 18 11:41:47 crc kubenswrapper[4922]: I0218 11:41:47.588988 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 11:41:47 crc kubenswrapper[4922]: I0218 11:41:47.597599 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 11:41:47 crc kubenswrapper[4922]: I0218 11:41:47.833641 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 11:41:59 crc kubenswrapper[4922]: I0218 11:41:59.937657 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 11:42:00 crc kubenswrapper[4922]: I0218 11:42:00.446410 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.402721 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs74j"] Feb 18 11:42:02 crc kubenswrapper[4922]: E0218 11:42:02.402977 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.402992 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.403127 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.403801 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.417004 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs74j"] Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.481961 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-trusted-ca\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482306 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/907f9baa-e193-4055-b982-d9a58830ea01-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-registry-tls\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482397 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482429 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8khc\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-kube-api-access-p8khc\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482456 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/907f9baa-e193-4055-b982-d9a58830ea01-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-bound-sa-token\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.482525 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-registry-certificates\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.502682 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583020 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-trusted-ca\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583078 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/907f9baa-e193-4055-b982-d9a58830ea01-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583096 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-registry-tls\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583126 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8khc\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-kube-api-access-p8khc\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583147 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/907f9baa-e193-4055-b982-d9a58830ea01-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-bound-sa-token\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.583346 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-registry-certificates\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.584282 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/907f9baa-e193-4055-b982-d9a58830ea01-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.584710 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-trusted-ca\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.584770 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/907f9baa-e193-4055-b982-d9a58830ea01-registry-certificates\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.590457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-registry-tls\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.602895 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-bound-sa-token\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.604075 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/907f9baa-e193-4055-b982-d9a58830ea01-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.605160 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8khc\" (UniqueName: \"kubernetes.io/projected/907f9baa-e193-4055-b982-d9a58830ea01-kube-api-access-p8khc\") pod \"image-registry-66df7c8f76-vs74j\" (UID: \"907f9baa-e193-4055-b982-d9a58830ea01\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:02 crc kubenswrapper[4922]: I0218 11:42:02.766401 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:03 crc kubenswrapper[4922]: I0218 11:42:03.390524 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs74j"] Feb 18 11:42:03 crc kubenswrapper[4922]: I0218 11:42:03.479910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" event={"ID":"907f9baa-e193-4055-b982-d9a58830ea01","Type":"ContainerStarted","Data":"42ac242033fda98a291ccd1a343dcd807760f0ab7af506661d3fd52a6c320d25"} Feb 18 11:42:04 crc kubenswrapper[4922]: I0218 11:42:04.487832 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" event={"ID":"907f9baa-e193-4055-b982-d9a58830ea01","Type":"ContainerStarted","Data":"c7f1fff82df0683e8840e22109dfa6a24b674de8548daeedacd7f959f941418e"} Feb 18 11:42:04 crc kubenswrapper[4922]: I0218 11:42:04.488009 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:04 crc kubenswrapper[4922]: I0218 11:42:04.516567 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" podStartSLOduration=2.516547376 podStartE2EDuration="2.516547376s" podCreationTimestamp="2026-02-18 11:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:42:04.513045457 +0000 UTC m=+326.240749537" watchObservedRunningTime="2026-02-18 11:42:04.516547376 +0000 UTC m=+326.244251456" Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.782381 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.783185 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7dzbt" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="registry-server" containerID="cri-o://8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77" gracePeriod=30 Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.795518 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.795787 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5lflw" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="registry-server" containerID="cri-o://3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b" gracePeriod=30 Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.803219 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.803611 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" containerID="cri-o://47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0" gracePeriod=30 Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.825560 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 
18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.826085 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5vjsn" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="registry-server" containerID="cri-o://7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178" gracePeriod=30 Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.830240 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.830421 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wz74v" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="registry-server" containerID="cri-o://f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927" gracePeriod=30 Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.834254 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gjc8w"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.835065 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.844272 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gjc8w"] Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.920801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.920841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9rm\" (UniqueName: \"kubernetes.io/projected/452cdbd0-d1e1-491a-8edd-d0f88f602364-kube-api-access-tq9rm\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:10 crc kubenswrapper[4922]: I0218 11:42:10.920899 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.022482 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.022609 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-trusted-ca\") 
pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.022657 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq9rm\" (UniqueName: \"kubernetes.io/projected/452cdbd0-d1e1-491a-8edd-d0f88f602364-kube-api-access-tq9rm\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.024420 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.029409 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/452cdbd0-d1e1-491a-8edd-d0f88f602364-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.040544 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq9rm\" (UniqueName: \"kubernetes.io/projected/452cdbd0-d1e1-491a-8edd-d0f88f602364-kube-api-access-tq9rm\") pod \"marketplace-operator-79b997595-gjc8w\" (UID: \"452cdbd0-d1e1-491a-8edd-d0f88f602364\") " pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.290185 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.293938 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.305079 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.328902 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6nbf\" (UniqueName: \"kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf\") pod \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.328988 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities\") pod \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.329161 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content\") pod \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\" (UID: \"fe4edbcb-8a38-4f30-975f-aa4825192b4e\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.330683 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities" (OuterVolumeSpecName: "utilities") pod "fe4edbcb-8a38-4f30-975f-aa4825192b4e" (UID: "fe4edbcb-8a38-4f30-975f-aa4825192b4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.341698 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.352148 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf" (OuterVolumeSpecName: "kube-api-access-k6nbf") pod "fe4edbcb-8a38-4f30-975f-aa4825192b4e" (UID: "fe4edbcb-8a38-4f30-975f-aa4825192b4e"). InnerVolumeSpecName "kube-api-access-k6nbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.360041 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.362740 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.410220 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe4edbcb-8a38-4f30-975f-aa4825192b4e" (UID: "fe4edbcb-8a38-4f30-975f-aa4825192b4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431184 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content\") pod \"47c627d0-6fb9-4b77-b266-74670361fcd6\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431249 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities\") pod \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431287 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content\") pod \"bf0d2342-e758-43cc-8c89-adc3ceb98453\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431320 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics\") pod \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431386 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h5ql\" (UniqueName: \"kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql\") pod \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431507 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zw6\" (UniqueName: \"kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6\") pod \"bf0d2342-e758-43cc-8c89-adc3ceb98453\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431537 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities\") pod \"bf0d2342-e758-43cc-8c89-adc3ceb98453\" (UID: \"bf0d2342-e758-43cc-8c89-adc3ceb98453\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431569 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content\") pod \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431597 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x5ts\" (UniqueName: \"kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts\") pod \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\" (UID: \"9cddee0a-8b13-429b-89b6-e820f8f3ec59\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431733 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z6qm\" (UniqueName: 
\"kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm\") pod \"47c627d0-6fb9-4b77-b266-74670361fcd6\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca\") pod \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\" (UID: \"aa233e7a-8a71-495c-b696-2f3dac9f0ada\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.431818 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities\") pod \"47c627d0-6fb9-4b77-b266-74670361fcd6\" (UID: \"47c627d0-6fb9-4b77-b266-74670361fcd6\") " Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.432149 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6nbf\" (UniqueName: \"kubernetes.io/projected/fe4edbcb-8a38-4f30-975f-aa4825192b4e-kube-api-access-k6nbf\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.432171 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.432184 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4edbcb-8a38-4f30-975f-aa4825192b4e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.433071 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities" (OuterVolumeSpecName: "utilities") pod "47c627d0-6fb9-4b77-b266-74670361fcd6" (UID: "47c627d0-6fb9-4b77-b266-74670361fcd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.433947 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities" (OuterVolumeSpecName: "utilities") pod "bf0d2342-e758-43cc-8c89-adc3ceb98453" (UID: "bf0d2342-e758-43cc-8c89-adc3ceb98453"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.434774 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6" (OuterVolumeSpecName: "kube-api-access-x7zw6") pod "bf0d2342-e758-43cc-8c89-adc3ceb98453" (UID: "bf0d2342-e758-43cc-8c89-adc3ceb98453"). InnerVolumeSpecName "kube-api-access-x7zw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.437174 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aa233e7a-8a71-495c-b696-2f3dac9f0ada" (UID: "aa233e7a-8a71-495c-b696-2f3dac9f0ada"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.441339 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aa233e7a-8a71-495c-b696-2f3dac9f0ada" (UID: "aa233e7a-8a71-495c-b696-2f3dac9f0ada"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.441649 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm" (OuterVolumeSpecName: "kube-api-access-4z6qm") pod "47c627d0-6fb9-4b77-b266-74670361fcd6" (UID: "47c627d0-6fb9-4b77-b266-74670361fcd6"). InnerVolumeSpecName "kube-api-access-4z6qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.442185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities" (OuterVolumeSpecName: "utilities") pod "9cddee0a-8b13-429b-89b6-e820f8f3ec59" (UID: "9cddee0a-8b13-429b-89b6-e820f8f3ec59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.444764 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql" (OuterVolumeSpecName: "kube-api-access-2h5ql") pod "aa233e7a-8a71-495c-b696-2f3dac9f0ada" (UID: "aa233e7a-8a71-495c-b696-2f3dac9f0ada"). InnerVolumeSpecName "kube-api-access-2h5ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.444918 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts" (OuterVolumeSpecName: "kube-api-access-6x5ts") pod "9cddee0a-8b13-429b-89b6-e820f8f3ec59" (UID: "9cddee0a-8b13-429b-89b6-e820f8f3ec59"). InnerVolumeSpecName "kube-api-access-6x5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.480514 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf0d2342-e758-43cc-8c89-adc3ceb98453" (UID: "bf0d2342-e758-43cc-8c89-adc3ceb98453"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.500637 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cddee0a-8b13-429b-89b6-e820f8f3ec59" (UID: "9cddee0a-8b13-429b-89b6-e820f8f3ec59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.530124 4922 generic.go:334] "Generic (PLEG): container finished" podID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerID="f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927" exitCode=0 Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.530181 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wz74v" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.530179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerDied","Data":"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.530224 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wz74v" event={"ID":"47c627d0-6fb9-4b77-b266-74670361fcd6","Type":"ContainerDied","Data":"86936b3faf99a98562fccc6ce9e3e9f7de7879c692a3b15d363c67f9bb07864e"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.530242 4922 scope.go:117] "RemoveContainer" containerID="f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532704 4922 generic.go:334] "Generic (PLEG): container finished" podID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerID="8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77" exitCode=0 Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532759 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zw6\" (UniqueName: \"kubernetes.io/projected/bf0d2342-e758-43cc-8c89-adc3ceb98453-kube-api-access-x7zw6\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532773 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerDied","Data":"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532800 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dzbt" event={"ID":"fe4edbcb-8a38-4f30-975f-aa4825192b4e","Type":"ContainerDied","Data":"aa0626d406720474e06eba27d9c88b12751f048f72073c63b3e1e91b6784d080"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532782 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532838 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532853 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x5ts\" (UniqueName: \"kubernetes.io/projected/9cddee0a-8b13-429b-89b6-e820f8f3ec59-kube-api-access-6x5ts\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532863 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dzbt" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532870 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z6qm\" (UniqueName: \"kubernetes.io/projected/47c627d0-6fb9-4b77-b266-74670361fcd6-kube-api-access-4z6qm\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532884 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532897 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532910 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cddee0a-8b13-429b-89b6-e820f8f3ec59-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532923 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf0d2342-e758-43cc-8c89-adc3ceb98453-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532936 4922 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa233e7a-8a71-495c-b696-2f3dac9f0ada-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.532950 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h5ql\" (UniqueName: \"kubernetes.io/projected/aa233e7a-8a71-495c-b696-2f3dac9f0ada-kube-api-access-2h5ql\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.537488 4922 generic.go:334] "Generic (PLEG): container finished" podID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerID="7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178" exitCode=0 Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.537545 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerDied","Data":"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.537571 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vjsn" event={"ID":"bf0d2342-e758-43cc-8c89-adc3ceb98453","Type":"ContainerDied","Data":"253c1fe74cd3ee618dc4b79bd02124aafb18cc9d88de6205e0a30abc2ecadb34"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.537635 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vjsn" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.548341 4922 generic.go:334] "Generic (PLEG): container finished" podID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerID="3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b" exitCode=0 Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.548485 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerDied","Data":"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.548530 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5lflw" event={"ID":"9cddee0a-8b13-429b-89b6-e820f8f3ec59","Type":"ContainerDied","Data":"c1ce59c10870c2ecd21ad32da1730316e1c9e1d338deac7b1c3b3f7688db298c"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.548579 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5lflw" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.551908 4922 generic.go:334] "Generic (PLEG): container finished" podID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerID="47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0" exitCode=0 Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.551940 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" event={"ID":"aa233e7a-8a71-495c-b696-2f3dac9f0ada","Type":"ContainerDied","Data":"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.551964 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" event={"ID":"aa233e7a-8a71-495c-b696-2f3dac9f0ada","Type":"ContainerDied","Data":"0d0074ed8642e690505cb0bcb8a3858df0eaa0a88849d13791f2daa4f4d6c521"} Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.552038 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nc7b9" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.553535 4922 scope.go:117] "RemoveContainer" containerID="e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.586504 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.588955 4922 scope.go:117] "RemoveContainer" containerID="339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.596960 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vjsn"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.610731 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.614741 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7dzbt"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.621510 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.622571 4922 scope.go:117] "RemoveContainer" containerID="f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.623104 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927\": container with ID starting with f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927 not found: ID does not exist" containerID="f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623165 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927"} err="failed to get container status \"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927\": rpc error: code = NotFound desc = could not find container \"f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927\": container with ID starting with f527847320af902ee6f743ec960949ebda5617bce3a441ada1527d1de64a2927 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623188 4922 scope.go:117] "RemoveContainer" containerID="e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.623498 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1\": container with ID starting with e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1 not found: ID does not exist" containerID="e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623520 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1"} err="failed to get container status \"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1\": rpc error: code = 
NotFound desc = could not find container \"e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1\": container with ID starting with e1f53ef0abd3288d9c5b216794b6258c6d92e85e01c8016df80b39534a5185b1 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623534 4922 scope.go:117] "RemoveContainer" containerID="339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.623786 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679\": container with ID starting with 339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679 not found: ID does not exist" containerID="339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623807 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679"} err="failed to get container status \"339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679\": rpc error: code = NotFound desc = could not find container \"339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679\": container with ID starting with 339bdbf56d4f0a3e6321916973a3099db06b916e6ec5ad9353826af30e315679 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.623818 4922 scope.go:117] "RemoveContainer" containerID="8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.625591 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5lflw"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.628832 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.631887 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nc7b9"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.635853 4922 scope.go:117] "RemoveContainer" containerID="a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.642745 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47c627d0-6fb9-4b77-b266-74670361fcd6" (UID: "47c627d0-6fb9-4b77-b266-74670361fcd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.648835 4922 scope.go:117] "RemoveContainer" containerID="f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.664323 4922 scope.go:117] "RemoveContainer" containerID="8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.664772 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77\": container with ID starting with 8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77 not found: ID does not exist" containerID="8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.664818 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77"} err="failed to get container status \"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77\": rpc error: code = NotFound desc = could not find container \"8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77\": container with ID starting with 8fdaa83d5f72f970ccf8db14422f820c61b3754fa8464fddf21f1bada5ca3b77 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.664845 4922 scope.go:117] "RemoveContainer" containerID="a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.665242 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395\": container with ID starting with a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395 not found: ID does not exist" containerID="a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.665266 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395"} err="failed to get container status \"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395\": rpc error: code = NotFound desc = could not find container \"a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395\": container with ID starting with a2d26f75f0731112c78aac584a6d3dba85b5a815a715a0b94935517739394395 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.665295 4922 scope.go:117] "RemoveContainer" containerID="f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.665569 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c\": container with ID starting with f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c not found: ID does not exist" containerID="f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.665591 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c"} err="failed to get container status \"f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c\": rpc error: code = NotFound desc = could not find container \"f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c\": container with ID starting with f76c202bd50c16deefebdd21c812fb020f0a14b30d06f1245a0034d7909dea9c not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.665605 4922 scope.go:117] "RemoveContainer" containerID="7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.678393 4922 scope.go:117] "RemoveContainer" containerID="bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.694744 4922 scope.go:117] "RemoveContainer" containerID="88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.721054 4922 scope.go:117] "RemoveContainer" containerID="7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.721823 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178\": container with ID starting with 7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178 not found: ID does not exist" containerID="7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.721901 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178"} err="failed to get container status \"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178\": rpc error: code = NotFound desc = could not find container \"7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178\": container with ID starting with 7540772acba64ca21dc3279082ac26db8226d215a021ff0397854fc0d4e90178 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.722012 4922 scope.go:117] "RemoveContainer" containerID="bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.722889 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd\": container with ID starting with bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd not found: ID does not exist" containerID="bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.722945 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd"} err="failed to get container status \"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd\": rpc error: code = NotFound desc = could not find container \"bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd\": container with ID starting with bb328f8b6bae7db774e27f7422bc433c58fde944326f82372643b61339bf49bd not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.722981 4922 
scope.go:117] "RemoveContainer" containerID="88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.723348 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50\": container with ID starting with 88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50 not found: ID does not exist" containerID="88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.723395 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50"} err="failed to get container status \"88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50\": rpc error: code = NotFound desc = could not find container \"88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50\": container with ID starting with 88ce7213e522a2bdfb89014af40e0edefc2f7d30268eda8f28f4c4825cf47c50 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.723411 4922 scope.go:117] "RemoveContainer" containerID="3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.736039 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c627d0-6fb9-4b77-b266-74670361fcd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.742508 4922 scope.go:117] "RemoveContainer" containerID="0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.770542 4922 scope.go:117] "RemoveContainer" containerID="51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.779645 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gjc8w"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.792032 4922 scope.go:117] "RemoveContainer" containerID="3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.793050 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b\": container with ID starting with 3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b not found: ID does not exist" containerID="3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793088 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b"} err="failed to get container status \"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b\": rpc error: code = NotFound desc = could not find container \"3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b\": container with ID starting with 3095825ff9ad7d2de643243385e47b2af86046b60063ce1e20f19e93ab18808b not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793115 4922 scope.go:117] "RemoveContainer" 
containerID="0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.793421 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181\": container with ID starting with 0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181 not found: ID does not exist" containerID="0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793453 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181"} err="failed to get container status \"0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181\": rpc error: code = NotFound desc = could not find container \"0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181\": container with ID starting with 0cb6e0bde2199b61ebae0ee2265108d0d8e55760b5cb6ca0dac142b8c2a65181 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793476 4922 scope.go:117] "RemoveContainer" containerID="51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.793797 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605\": container with ID starting with 51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605 not found: ID does not exist" containerID="51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793828 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605"} err="failed to get container status \"51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605\": rpc error: code = NotFound desc = could not find container \"51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605\": container with ID starting with 51b067b284f625e9e52dba70686a38665f13c1a97c6bdfd4e23fe3752e04d605 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.793852 4922 scope.go:117] "RemoveContainer" containerID="47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.811878 4922 scope.go:117] "RemoveContainer" containerID="47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0" Feb 18 11:42:11 crc kubenswrapper[4922]: E0218 11:42:11.812398 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0\": container with ID starting with 47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0 not found: ID does not exist" containerID="47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.812431 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0"} err="failed to get container status \"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0\": rpc error: code = 
NotFound desc = could not find container \"47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0\": container with ID starting with 47f81b843aae8ab322577ee88976e3fa1b06eba4eb120e58e24ce0de633eadc0 not found: ID does not exist" Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.865895 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:42:11 crc kubenswrapper[4922]: I0218 11:42:11.869950 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wz74v"] Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.570179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" event={"ID":"452cdbd0-d1e1-491a-8edd-d0f88f602364","Type":"ContainerStarted","Data":"08d143d8ffe4dee8c593cdef88229d35623dba820d4fea7632533876ebe75223"} Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.570435 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.570450 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" event={"ID":"452cdbd0-d1e1-491a-8edd-d0f88f602364","Type":"ContainerStarted","Data":"5c956ec9346cb8bb4e092814f3abd8855e5b58608bc7328e5a8361ab35542ede"} Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.574907 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.588087 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gjc8w" podStartSLOduration=2.588062174 podStartE2EDuration="2.588062174s" podCreationTimestamp="2026-02-18 11:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:42:12.584496537 +0000 UTC m=+334.312200627" watchObservedRunningTime="2026-02-18 11:42:12.588062174 +0000 UTC m=+334.315766274" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.982010 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" path="/var/lib/kubelet/pods/47c627d0-6fb9-4b77-b266-74670361fcd6/volumes" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.983765 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" path="/var/lib/kubelet/pods/9cddee0a-8b13-429b-89b6-e820f8f3ec59/volumes" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.984561 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" path="/var/lib/kubelet/pods/aa233e7a-8a71-495c-b696-2f3dac9f0ada/volumes" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.987218 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" path="/var/lib/kubelet/pods/bf0d2342-e758-43cc-8c89-adc3ceb98453/volumes" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.987892 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" path="/var/lib/kubelet/pods/fe4edbcb-8a38-4f30-975f-aa4825192b4e/volumes" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998054 4922 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfw5z"] Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998314 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998343 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998353 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998380 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998389 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998399 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998409 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998415 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998423 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998429 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998436 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998460 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998470 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998476 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998488 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998493 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998501 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 
11:42:12.998508 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="extract-content" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998536 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998543 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998552 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998557 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998566 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998572 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="extract-utilities" Feb 18 11:42:12 crc kubenswrapper[4922]: E0218 11:42:12.998579 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998586 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998760 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c627d0-6fb9-4b77-b266-74670361fcd6" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998792 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4edbcb-8a38-4f30-975f-aa4825192b4e" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998805 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa233e7a-8a71-495c-b696-2f3dac9f0ada" containerName="marketplace-operator" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998812 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0d2342-e758-43cc-8c89-adc3ceb98453" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.998821 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cddee0a-8b13-429b-89b6-e820f8f3ec59" containerName="registry-server" Feb 18 11:42:12 crc kubenswrapper[4922]: I0218 11:42:12.999753 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.001856 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.002226 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfw5z"] Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.054951 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-utilities\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.055043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5pzr\" (UniqueName: \"kubernetes.io/projected/523054ef-f8bb-4c7d-9baa-47191e299fcd-kube-api-access-d5pzr\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.055097 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-catalog-content\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.156245 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-utilities\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.156327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5pzr\" (UniqueName: \"kubernetes.io/projected/523054ef-f8bb-4c7d-9baa-47191e299fcd-kube-api-access-d5pzr\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.156385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-catalog-content\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.156710 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-utilities\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.156783 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523054ef-f8bb-4c7d-9baa-47191e299fcd-catalog-content\") pod \"certified-operators-cfw5z\" (UID: 
\"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.176964 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5pzr\" (UniqueName: \"kubernetes.io/projected/523054ef-f8bb-4c7d-9baa-47191e299fcd-kube-api-access-d5pzr\") pod \"certified-operators-cfw5z\" (UID: \"523054ef-f8bb-4c7d-9baa-47191e299fcd\") " pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.202866 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.204418 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.208417 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.213379 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.257537 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q98tn\" (UniqueName: \"kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.257613 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.257787 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.326041 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.359971 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q98tn\" (UniqueName: \"kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.360115 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.360219 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.362136 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.362458 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.375227 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q98tn\" (UniqueName: \"kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn\") pod \"redhat-marketplace-6bqhb\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.527583 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.752743 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfw5z"] Feb 18 11:42:13 crc kubenswrapper[4922]: I0218 11:42:13.901731 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.581981 4922 generic.go:334] "Generic (PLEG): container finished" podID="523054ef-f8bb-4c7d-9baa-47191e299fcd" containerID="5c138e51dd53973f4bcf82a962a1cbb32de5df0803519276a5f62e791887d52f" exitCode=0 Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.582033 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfw5z" event={"ID":"523054ef-f8bb-4c7d-9baa-47191e299fcd","Type":"ContainerDied","Data":"5c138e51dd53973f4bcf82a962a1cbb32de5df0803519276a5f62e791887d52f"} Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.582163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfw5z" event={"ID":"523054ef-f8bb-4c7d-9baa-47191e299fcd","Type":"ContainerStarted","Data":"42ee5053d4102f9b61bc44a59c440005e5ec25fd25263fc4c4467a03f68e1731"} Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.584348 4922 generic.go:334] "Generic (PLEG): container finished" podID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerID="66cf4f94781e4ece125829fc4a1a5acf7beefaa52399d35c3cf834cf5448be6c" exitCode=0 Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.584419 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerDied","Data":"66cf4f94781e4ece125829fc4a1a5acf7beefaa52399d35c3cf834cf5448be6c"} Feb 18 11:42:14 crc kubenswrapper[4922]: I0218 11:42:14.584484 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerStarted","Data":"81c45efcdac9362802de58dd27bf893197de86f7ebd3df5f164f632b261368cc"} Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.401777 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9zn6"] Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.410217 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.413593 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.419413 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9zn6"] Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.489138 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-utilities\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.489205 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhsn\" (UniqueName: \"kubernetes.io/projected/f779c873-d525-428d-88ed-828d00bf17eb-kube-api-access-8bhsn\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.489288 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-catalog-content\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.591161 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-utilities\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.591239 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhsn\" (UniqueName: \"kubernetes.io/projected/f779c873-d525-428d-88ed-828d00bf17eb-kube-api-access-8bhsn\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.591284 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-catalog-content\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.591852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-catalog-content\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.592144 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f779c873-d525-428d-88ed-828d00bf17eb-utilities\") pod \"community-operators-h9zn6\" (UID: 
\"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.595045 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfw5z" event={"ID":"523054ef-f8bb-4c7d-9baa-47191e299fcd","Type":"ContainerStarted","Data":"10b3eb493fda827c2b5df4c71ad15012442f5c255c7bce9fa21f0375147ef57f"} Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.596634 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.598096 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerStarted","Data":"dbaac2d048b1bf28c0a544d56094e4190ceeb213bd8b9b6ab6c58e5f9ca98a21"} Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.598178 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.603747 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.610594 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.618752 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhsn\" (UniqueName: \"kubernetes.io/projected/f779c873-d525-428d-88ed-828d00bf17eb-kube-api-access-8bhsn\") pod \"community-operators-h9zn6\" (UID: \"f779c873-d525-428d-88ed-828d00bf17eb\") " pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.691911 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.691972 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.691999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2pw6\" (UniqueName: \"kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.724083 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.793209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.793436 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.793469 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2pw6\" (UniqueName: \"kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.793830 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.794091 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.812922 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2pw6\" (UniqueName: \"kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6\") pod \"redhat-operators-48d4t\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:15 crc kubenswrapper[4922]: I0218 11:42:15.909967 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.136784 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9zn6"] Feb 18 11:42:16 crc kubenswrapper[4922]: W0218 11:42:16.144190 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf779c873_d525_428d_88ed_828d00bf17eb.slice/crio-e5d3a35c50339e26242c90315c245635e9fe48bbb2338a83a7676b1899b5b36f WatchSource:0}: Error finding container e5d3a35c50339e26242c90315c245635e9fe48bbb2338a83a7676b1899b5b36f: Status 404 returned error can't find the container with id e5d3a35c50339e26242c90315c245635e9fe48bbb2338a83a7676b1899b5b36f Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.284715 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 11:42:16 crc kubenswrapper[4922]: W0218 11:42:16.290380 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1faa074_0925_4c46_b2d7_3d5590f2bfb2.slice/crio-b63935ff693150fab51446b824532dfaf0826366b92ab601cfc7bb7ab4edc1ce WatchSource:0}: Error finding container b63935ff693150fab51446b824532dfaf0826366b92ab601cfc7bb7ab4edc1ce: Status 404 returned error can't find the container with id b63935ff693150fab51446b824532dfaf0826366b92ab601cfc7bb7ab4edc1ce Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.604964 4922 generic.go:334] "Generic (PLEG): container finished" podID="f779c873-d525-428d-88ed-828d00bf17eb" containerID="7c9a3d7b12c16a99991f15769b2726e6f0cfebcdccc145aa918d5e678eb3f45b" exitCode=0 Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.605041 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9zn6" event={"ID":"f779c873-d525-428d-88ed-828d00bf17eb","Type":"ContainerDied","Data":"7c9a3d7b12c16a99991f15769b2726e6f0cfebcdccc145aa918d5e678eb3f45b"} Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.605214 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9zn6" event={"ID":"f779c873-d525-428d-88ed-828d00bf17eb","Type":"ContainerStarted","Data":"e5d3a35c50339e26242c90315c245635e9fe48bbb2338a83a7676b1899b5b36f"} Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.609012 4922 generic.go:334] "Generic (PLEG): container finished" podID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerID="dbaac2d048b1bf28c0a544d56094e4190ceeb213bd8b9b6ab6c58e5f9ca98a21" exitCode=0 Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.609071 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerDied","Data":"dbaac2d048b1bf28c0a544d56094e4190ceeb213bd8b9b6ab6c58e5f9ca98a21"} Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.611787 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerID="9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4" exitCode=0 Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.611863 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerDied","Data":"9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4"} Feb 18 
11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.611886 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerStarted","Data":"b63935ff693150fab51446b824532dfaf0826366b92ab601cfc7bb7ab4edc1ce"} Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.614520 4922 generic.go:334] "Generic (PLEG): container finished" podID="523054ef-f8bb-4c7d-9baa-47191e299fcd" containerID="10b3eb493fda827c2b5df4c71ad15012442f5c255c7bce9fa21f0375147ef57f" exitCode=0 Feb 18 11:42:16 crc kubenswrapper[4922]: I0218 11:42:16.614543 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfw5z" event={"ID":"523054ef-f8bb-4c7d-9baa-47191e299fcd","Type":"ContainerDied","Data":"10b3eb493fda827c2b5df4c71ad15012442f5c255c7bce9fa21f0375147ef57f"} Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.622267 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfw5z" event={"ID":"523054ef-f8bb-4c7d-9baa-47191e299fcd","Type":"ContainerStarted","Data":"13290870f6ffe0c47237c67e10d8cbc97de5e07ed25e7172602da92c17e8b970"} Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.624607 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9zn6" event={"ID":"f779c873-d525-428d-88ed-828d00bf17eb","Type":"ContainerStarted","Data":"c2bca754efca2700d892df8d249dabe432437c2aabf48986910da880395d4d75"} Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.626777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerStarted","Data":"3c30617879326ec9fb7f35dddf404540d84efb8a598a6b6c0ad31087cd8073f1"} Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.629108 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerStarted","Data":"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6"} Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.637303 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfw5z" podStartSLOduration=3.231219018 podStartE2EDuration="5.637287118s" podCreationTimestamp="2026-02-18 11:42:12 +0000 UTC" firstStartedPulling="2026-02-18 11:42:14.583797102 +0000 UTC m=+336.311501182" lastFinishedPulling="2026-02-18 11:42:16.989865202 +0000 UTC m=+338.717569282" observedRunningTime="2026-02-18 11:42:17.635876534 +0000 UTC m=+339.363580614" watchObservedRunningTime="2026-02-18 11:42:17.637287118 +0000 UTC m=+339.364991198" Feb 18 11:42:17 crc kubenswrapper[4922]: I0218 11:42:17.657971 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bqhb" podStartSLOduration=2.08840754 podStartE2EDuration="4.657952757s" podCreationTimestamp="2026-02-18 11:42:13 +0000 UTC" firstStartedPulling="2026-02-18 11:42:14.585310948 +0000 UTC m=+336.313015018" lastFinishedPulling="2026-02-18 11:42:17.154856155 +0000 UTC m=+338.882560235" observedRunningTime="2026-02-18 11:42:17.655853056 +0000 UTC m=+339.383557146" watchObservedRunningTime="2026-02-18 11:42:17.657952757 +0000 UTC m=+339.385656837" Feb 18 11:42:18 crc kubenswrapper[4922]: I0218 11:42:18.647163 4922 generic.go:334] "Generic (PLEG): container 
finished" podID="f779c873-d525-428d-88ed-828d00bf17eb" containerID="c2bca754efca2700d892df8d249dabe432437c2aabf48986910da880395d4d75" exitCode=0 Feb 18 11:42:18 crc kubenswrapper[4922]: I0218 11:42:18.647236 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9zn6" event={"ID":"f779c873-d525-428d-88ed-828d00bf17eb","Type":"ContainerDied","Data":"c2bca754efca2700d892df8d249dabe432437c2aabf48986910da880395d4d75"} Feb 18 11:42:18 crc kubenswrapper[4922]: I0218 11:42:18.650631 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerID="23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6" exitCode=0 Feb 18 11:42:18 crc kubenswrapper[4922]: I0218 11:42:18.650685 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerDied","Data":"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6"} Feb 18 11:42:19 crc kubenswrapper[4922]: I0218 11:42:19.659031 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerStarted","Data":"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55"} Feb 18 11:42:19 crc kubenswrapper[4922]: I0218 11:42:19.661650 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9zn6" event={"ID":"f779c873-d525-428d-88ed-828d00bf17eb","Type":"ContainerStarted","Data":"88c1b0603c2e2c8ecd6b5d934399d65a9fdbe60917ef93db2d5c5f480e2fae4f"} Feb 18 11:42:19 crc kubenswrapper[4922]: I0218 11:42:19.680201 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48d4t" podStartSLOduration=2.1458161159999998 podStartE2EDuration="4.680179624s" podCreationTimestamp="2026-02-18 11:42:15 +0000 UTC" firstStartedPulling="2026-02-18 11:42:16.612940255 +0000 UTC m=+338.340644335" lastFinishedPulling="2026-02-18 11:42:19.147303753 +0000 UTC m=+340.875007843" observedRunningTime="2026-02-18 11:42:19.674416065 +0000 UTC m=+341.402120165" watchObservedRunningTime="2026-02-18 11:42:19.680179624 +0000 UTC m=+341.407883704" Feb 18 11:42:19 crc kubenswrapper[4922]: I0218 11:42:19.695637 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9zn6" podStartSLOduration=2.185069312 podStartE2EDuration="4.695618056s" podCreationTimestamp="2026-02-18 11:42:15 +0000 UTC" firstStartedPulling="2026-02-18 11:42:16.606691304 +0000 UTC m=+338.334395404" lastFinishedPulling="2026-02-18 11:42:19.117240068 +0000 UTC m=+340.844944148" observedRunningTime="2026-02-18 11:42:19.693098235 +0000 UTC m=+341.420802325" watchObservedRunningTime="2026-02-18 11:42:19.695618056 +0000 UTC m=+341.423322136" Feb 18 11:42:22 crc kubenswrapper[4922]: I0218 11:42:22.777127 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vs74j" Feb 18 11:42:22 crc kubenswrapper[4922]: I0218 11:42:22.841290 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.326419 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:23 crc 
kubenswrapper[4922]: I0218 11:42:23.326695 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.369291 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.528208 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.528279 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.565699 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.745237 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 11:42:23 crc kubenswrapper[4922]: I0218 11:42:23.746832 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfw5z" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.725164 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.725740 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.789694 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.911712 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.913078 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:25 crc kubenswrapper[4922]: I0218 11:42:25.963536 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:26 crc kubenswrapper[4922]: I0218 11:42:26.768217 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9zn6" Feb 18 11:42:26 crc kubenswrapper[4922]: I0218 11:42:26.778898 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 11:42:39 crc kubenswrapper[4922]: I0218 11:42:39.807948 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:42:39 crc kubenswrapper[4922]: I0218 11:42:39.808639 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 18 11:42:47 crc kubenswrapper[4922]: I0218 11:42:47.895548 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" podUID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" containerName="registry" containerID="cri-o://9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9" gracePeriod=30 Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.292221 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.347614 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.347721 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.347818 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.347894 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.347948 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56jvt\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.348081 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.348117 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.348300 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\" (UID: \"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0\") " Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.350140 4922 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.350901 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.357092 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.359693 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.367054 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.370330 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.376709 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt" (OuterVolumeSpecName: "kube-api-access-56jvt") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "kube-api-access-56jvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.383454 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" (UID: "fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.449957 4922 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450004 4922 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450020 4922 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450032 4922 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450043 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56jvt\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-kube-api-access-56jvt\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450054 4922 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.450064 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.837554 4922 generic.go:334] "Generic (PLEG): container finished" podID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" containerID="9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9" exitCode=0 Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.837613 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" event={"ID":"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0","Type":"ContainerDied","Data":"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9"} Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.837672 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.837696 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wt2rf" event={"ID":"fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0","Type":"ContainerDied","Data":"ea1a8d4dd7f49c86b94455df1de54cf958a56e87072aa59f4711d51402743ec5"} Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.837737 4922 scope.go:117] "RemoveContainer" containerID="9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.864911 4922 scope.go:117] "RemoveContainer" containerID="9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9" Feb 18 11:42:48 crc kubenswrapper[4922]: E0218 11:42:48.866227 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9\": container with ID starting with 9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9 not found: ID does not exist" containerID="9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.866313 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9"} err="failed to get container status \"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9\": rpc error: code = NotFound desc = could not find container \"9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9\": container with ID starting with 9bc6cc301e873229c1127137e22f6a9b28ccb1bc2ceb7bd9d965728c3ed9f3a9 not found: ID does not exist" Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.890116 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.896840 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wt2rf"] Feb 18 11:42:48 crc kubenswrapper[4922]: I0218 11:42:48.983763 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" path="/var/lib/kubelet/pods/fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0/volumes" Feb 18 11:43:04 crc kubenswrapper[4922]: I0218 11:43:03.775598 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-x69t8" podUID="a768634b-1586-4ba2-9a05-6a88f5befea1" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:43:09 crc kubenswrapper[4922]: I0218 11:43:09.807251 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:43:09 crc kubenswrapper[4922]: I0218 11:43:09.808021 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:43:39 crc 
kubenswrapper[4922]: I0218 11:43:39.807215 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:43:39 crc kubenswrapper[4922]: I0218 11:43:39.807799 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:43:39 crc kubenswrapper[4922]: I0218 11:43:39.807846 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:43:39 crc kubenswrapper[4922]: I0218 11:43:39.808329 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:43:39 crc kubenswrapper[4922]: I0218 11:43:39.808402 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f" gracePeriod=600 Feb 18 11:43:40 crc kubenswrapper[4922]: I0218 11:43:40.247267 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f" exitCode=0 Feb 18 11:43:40 crc kubenswrapper[4922]: I0218 11:43:40.247386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f"} Feb 18 11:43:40 crc kubenswrapper[4922]: I0218 11:43:40.247653 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b"} Feb 18 11:43:40 crc kubenswrapper[4922]: I0218 11:43:40.247682 4922 scope.go:117] "RemoveContainer" containerID="653a3be8615ffc467d4be70b77ab62c837c24df920515a9965ec20ce2941d86b" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.178331 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4"] Feb 18 11:45:00 crc kubenswrapper[4922]: E0218 11:45:00.179086 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" containerName="registry" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.179100 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" containerName="registry" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.179211 4922 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="fa9f6f1e-d5ab-4de4-b8b4-ee14f742f2e0" containerName="registry" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.179632 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.181620 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.181621 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.187128 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4"] Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.291289 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhc4\" (UniqueName: \"kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.291842 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.291933 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.393583 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.393644 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.393706 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhc4\" (UniqueName: \"kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 
11:45:00.394854 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.408927 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.423923 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhc4\" (UniqueName: \"kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4\") pod \"collect-profiles-29523585-sckv4\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.500235 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.699074 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4"] Feb 18 11:45:00 crc kubenswrapper[4922]: I0218 11:45:00.790091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" event={"ID":"74bd299a-42ac-4c5a-93ff-5809da5517b3","Type":"ContainerStarted","Data":"c7c0dc7121ed5ba5ab0dbf0029cc1d1f88e3c9c5a36c6287bbf69b39c0f6db47"} Feb 18 11:45:01 crc kubenswrapper[4922]: I0218 11:45:01.796301 4922 generic.go:334] "Generic (PLEG): container finished" podID="74bd299a-42ac-4c5a-93ff-5809da5517b3" containerID="1e57799f76ef61ec42eb4d7506cd5272291d57133dccf113ac6a6ed7f96b16b6" exitCode=0 Feb 18 11:45:01 crc kubenswrapper[4922]: I0218 11:45:01.796563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" event={"ID":"74bd299a-42ac-4c5a-93ff-5809da5517b3","Type":"ContainerDied","Data":"1e57799f76ef61ec42eb4d7506cd5272291d57133dccf113ac6a6ed7f96b16b6"} Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.304550 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.506176 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume\") pod \"74bd299a-42ac-4c5a-93ff-5809da5517b3\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.506289 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume\") pod \"74bd299a-42ac-4c5a-93ff-5809da5517b3\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.506547 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhc4\" (UniqueName: \"kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4\") pod \"74bd299a-42ac-4c5a-93ff-5809da5517b3\" (UID: \"74bd299a-42ac-4c5a-93ff-5809da5517b3\") " Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.507023 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "74bd299a-42ac-4c5a-93ff-5809da5517b3" (UID: "74bd299a-42ac-4c5a-93ff-5809da5517b3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.511891 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4" (OuterVolumeSpecName: "kube-api-access-tzhc4") pod "74bd299a-42ac-4c5a-93ff-5809da5517b3" (UID: "74bd299a-42ac-4c5a-93ff-5809da5517b3"). InnerVolumeSpecName "kube-api-access-tzhc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.512779 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "74bd299a-42ac-4c5a-93ff-5809da5517b3" (UID: "74bd299a-42ac-4c5a-93ff-5809da5517b3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.607898 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhc4\" (UniqueName: \"kubernetes.io/projected/74bd299a-42ac-4c5a-93ff-5809da5517b3-kube-api-access-tzhc4\") on node \"crc\" DevicePath \"\"" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.608143 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bd299a-42ac-4c5a-93ff-5809da5517b3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:45:03 crc kubenswrapper[4922]: I0218 11:45:03.608235 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/74bd299a-42ac-4c5a-93ff-5809da5517b3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:45:04 crc kubenswrapper[4922]: I0218 11:45:04.112802 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" event={"ID":"74bd299a-42ac-4c5a-93ff-5809da5517b3","Type":"ContainerDied","Data":"c7c0dc7121ed5ba5ab0dbf0029cc1d1f88e3c9c5a36c6287bbf69b39c0f6db47"} Feb 18 11:45:04 crc kubenswrapper[4922]: I0218 11:45:04.113122 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c0dc7121ed5ba5ab0dbf0029cc1d1f88e3c9c5a36c6287bbf69b39c0f6db47" Feb 18 11:45:04 crc kubenswrapper[4922]: I0218 11:45:04.112859 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4" Feb 18 11:46:09 crc kubenswrapper[4922]: I0218 11:46:09.808157 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:46:09 crc kubenswrapper[4922]: I0218 11:46:09.808918 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:46:39 crc kubenswrapper[4922]: I0218 11:46:39.254852 4922 scope.go:117] "RemoveContainer" containerID="bebc46c7f9271b765da6413fad509e6ec5685b8951728d70348b293df49ce847" Feb 18 11:46:39 crc kubenswrapper[4922]: I0218 11:46:39.807657 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:46:39 crc kubenswrapper[4922]: I0218 11:46:39.808123 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.266474 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw"] Feb 18 11:47:09 crc kubenswrapper[4922]: E0218 11:47:09.267241 
4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bd299a-42ac-4c5a-93ff-5809da5517b3" containerName="collect-profiles" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.267257 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bd299a-42ac-4c5a-93ff-5809da5517b3" containerName="collect-profiles" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.267400 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bd299a-42ac-4c5a-93ff-5809da5517b3" containerName="collect-profiles" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.267834 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.272474 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-m7rrs" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.272663 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.272959 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.282283 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.288094 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-tq4pt"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.288863 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-tq4pt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.293590 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-d6vgr" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.302867 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-tq4pt"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.303916 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttg7v\" (UniqueName: \"kubernetes.io/projected/906da7e7-ffe0-496f-bfb4-a76c2c14589e-kube-api-access-ttg7v\") pod \"cert-manager-858654f9db-tq4pt\" (UID: \"906da7e7-ffe0-496f-bfb4-a76c2c14589e\") " pod="cert-manager/cert-manager-858654f9db-tq4pt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.303985 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzkj\" (UniqueName: \"kubernetes.io/projected/c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1-kube-api-access-gzzkj\") pod \"cert-manager-cainjector-cf98fcc89-wlvsw\" (UID: \"c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.320306 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vvgzd"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.320973 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.322930 4922 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l4z5h" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.328277 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vvgzd"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.405972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttg7v\" (UniqueName: \"kubernetes.io/projected/906da7e7-ffe0-496f-bfb4-a76c2c14589e-kube-api-access-ttg7v\") pod \"cert-manager-858654f9db-tq4pt\" (UID: \"906da7e7-ffe0-496f-bfb4-a76c2c14589e\") " pod="cert-manager/cert-manager-858654f9db-tq4pt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.406082 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h6r4\" (UniqueName: \"kubernetes.io/projected/04a66d89-6415-45c5-b87b-b3730678eac4-kube-api-access-6h6r4\") pod \"cert-manager-webhook-687f57d79b-vvgzd\" (UID: \"04a66d89-6415-45c5-b87b-b3730678eac4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.406127 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzkj\" (UniqueName: \"kubernetes.io/projected/c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1-kube-api-access-gzzkj\") pod \"cert-manager-cainjector-cf98fcc89-wlvsw\" (UID: \"c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.424999 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzkj\" (UniqueName: \"kubernetes.io/projected/c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1-kube-api-access-gzzkj\") pod \"cert-manager-cainjector-cf98fcc89-wlvsw\" (UID: \"c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.425340 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttg7v\" (UniqueName: \"kubernetes.io/projected/906da7e7-ffe0-496f-bfb4-a76c2c14589e-kube-api-access-ttg7v\") pod \"cert-manager-858654f9db-tq4pt\" (UID: \"906da7e7-ffe0-496f-bfb4-a76c2c14589e\") " pod="cert-manager/cert-manager-858654f9db-tq4pt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.507553 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h6r4\" (UniqueName: \"kubernetes.io/projected/04a66d89-6415-45c5-b87b-b3730678eac4-kube-api-access-6h6r4\") pod \"cert-manager-webhook-687f57d79b-vvgzd\" (UID: \"04a66d89-6415-45c5-b87b-b3730678eac4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.524081 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h6r4\" (UniqueName: \"kubernetes.io/projected/04a66d89-6415-45c5-b87b-b3730678eac4-kube-api-access-6h6r4\") pod \"cert-manager-webhook-687f57d79b-vvgzd\" (UID: \"04a66d89-6415-45c5-b87b-b3730678eac4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.589256 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.608522 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-tq4pt" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.637455 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.807227 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.807555 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.807603 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.808166 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.808229 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b" gracePeriod=600 Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.823642 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw"] Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.851354 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 11:47:09 crc kubenswrapper[4922]: I0218 11:47:09.919807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" event={"ID":"c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1","Type":"ContainerStarted","Data":"09a208642ab1c9ffa1bb4ad1898807211597d5a3837c02d54769debc7d48b28b"} Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.054792 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-tq4pt"] Feb 18 11:47:10 crc kubenswrapper[4922]: W0218 11:47:10.061684 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906da7e7_ffe0_496f_bfb4_a76c2c14589e.slice/crio-1a4c95f1fb5ea5125698f3168e43d4b01d01d12d637c5d933a214c1de67fcc3c WatchSource:0}: Error finding container 1a4c95f1fb5ea5125698f3168e43d4b01d01d12d637c5d933a214c1de67fcc3c: Status 404 returned error can't find the 
container with id 1a4c95f1fb5ea5125698f3168e43d4b01d01d12d637c5d933a214c1de67fcc3c Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.071528 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vvgzd"] Feb 18 11:47:10 crc kubenswrapper[4922]: W0218 11:47:10.077013 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a66d89_6415_45c5_b87b_b3730678eac4.slice/crio-fc2dab564f4f064235f842294b88f1bf319022b8f4f328947753588d3d3d7e2b WatchSource:0}: Error finding container fc2dab564f4f064235f842294b88f1bf319022b8f4f328947753588d3d3d7e2b: Status 404 returned error can't find the container with id fc2dab564f4f064235f842294b88f1bf319022b8f4f328947753588d3d3d7e2b Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.930475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-tq4pt" event={"ID":"906da7e7-ffe0-496f-bfb4-a76c2c14589e","Type":"ContainerStarted","Data":"1a4c95f1fb5ea5125698f3168e43d4b01d01d12d637c5d933a214c1de67fcc3c"} Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.932365 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" event={"ID":"04a66d89-6415-45c5-b87b-b3730678eac4","Type":"ContainerStarted","Data":"fc2dab564f4f064235f842294b88f1bf319022b8f4f328947753588d3d3d7e2b"} Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.934823 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b" exitCode=0 Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.934853 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b"} Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.934870 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd"} Feb 18 11:47:10 crc kubenswrapper[4922]: I0218 11:47:10.934888 4922 scope.go:117] "RemoveContainer" containerID="f1a0cd1a4059d80457c2ea470bb511d4a674a1cbfe4b38356d0636a94629e11f" Feb 18 11:47:13 crc kubenswrapper[4922]: I0218 11:47:13.953647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-tq4pt" event={"ID":"906da7e7-ffe0-496f-bfb4-a76c2c14589e","Type":"ContainerStarted","Data":"ae48bee2f8aea520c7c10505f58d8792b19e7e79ab4e2cd633bd5f8662fe7286"} Feb 18 11:47:13 crc kubenswrapper[4922]: I0218 11:47:13.955338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" event={"ID":"04a66d89-6415-45c5-b87b-b3730678eac4","Type":"ContainerStarted","Data":"083bfbb79085eaf04357a5635ba7f41cd159a66de64b81163224bf6a927bd2d5"} Feb 18 11:47:13 crc kubenswrapper[4922]: I0218 11:47:13.955434 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:13 crc kubenswrapper[4922]: I0218 11:47:13.956835 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" event={"ID":"c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1","Type":"ContainerStarted","Data":"58199038b740d8e74d83d11707f647becfb04c24df12d6df6c4138b5211ba652"} Feb 18 11:47:13 crc kubenswrapper[4922]: I0218 11:47:13.975324 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-tq4pt" podStartSLOduration=1.9179215410000001 podStartE2EDuration="4.975291513s" podCreationTimestamp="2026-02-18 11:47:09 +0000 UTC" firstStartedPulling="2026-02-18 11:47:10.062864617 +0000 UTC m=+631.790568697" lastFinishedPulling="2026-02-18 11:47:13.120234579 +0000 UTC m=+634.847938669" observedRunningTime="2026-02-18 11:47:13.970854073 +0000 UTC m=+635.698558163" watchObservedRunningTime="2026-02-18 11:47:13.975291513 +0000 UTC m=+635.702995613" Feb 18 11:47:14 crc kubenswrapper[4922]: I0218 11:47:14.001333 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wlvsw" podStartSLOduration=3.0104365299999998 podStartE2EDuration="5.001311667s" podCreationTimestamp="2026-02-18 11:47:09 +0000 UTC" firstStartedPulling="2026-02-18 11:47:09.851092405 +0000 UTC m=+631.578796485" lastFinishedPulling="2026-02-18 11:47:11.841967552 +0000 UTC m=+633.569671622" observedRunningTime="2026-02-18 11:47:14.000464906 +0000 UTC m=+635.728169016" watchObservedRunningTime="2026-02-18 11:47:14.001311667 +0000 UTC m=+635.729015777" Feb 18 11:47:14 crc kubenswrapper[4922]: I0218 11:47:14.019314 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" podStartSLOduration=2.039468129 podStartE2EDuration="5.019296052s" podCreationTimestamp="2026-02-18 11:47:09 +0000 UTC" firstStartedPulling="2026-02-18 11:47:10.080410791 +0000 UTC m=+631.808114881" lastFinishedPulling="2026-02-18 11:47:13.060238724 +0000 UTC m=+634.787942804" observedRunningTime="2026-02-18 11:47:14.016183405 +0000 UTC m=+635.743887495" watchObservedRunningTime="2026-02-18 11:47:14.019296052 +0000 UTC m=+635.747000142" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.364629 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg4r5"] Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366134 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-controller" containerID="cri-o://9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366343 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="northd" containerID="cri-o://90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366510 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366582 4922 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-node" containerID="cri-o://3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366640 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-acl-logging" containerID="cri-o://fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366756 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="sbdb" containerID="cri-o://e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.366195 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="nbdb" containerID="cri-o://11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.413853 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" containerID="cri-o://d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" gracePeriod=30 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.640871 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-vvgzd" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.707775 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/3.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.709769 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovn-acl-logging/0.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.710320 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovn-controller/0.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.711402 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766351 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t47sv"] Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766567 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766581 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766591 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="sbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766597 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="sbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766607 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kubecfg-setup" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766615 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kubecfg-setup" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766623 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766629 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766639 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-acl-logging" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766645 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-acl-logging" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766658 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766664 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766671 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766677 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766684 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-node" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766690 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-node" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766696 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="nbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766702 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="nbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766709 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766715 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.766723 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="northd" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766729 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="northd" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766834 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766843 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766850 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="northd" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766856 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766863 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766871 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="kube-rbac-proxy-node" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766879 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="sbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766884 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766891 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="nbdb" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766899 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766909 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.766918 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovn-acl-logging" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.767004 4922 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.767012 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: E0218 11:47:19.767023 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.767031 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerName="ovnkube-controller" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.768749 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.769625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-log-socket\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.769901 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxc8\" (UniqueName: \"kubernetes.io/projected/7b826f3a-fb9a-4cf2-a4de-c6a394001583-kube-api-access-wgxc8\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-bin\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-slash\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770215 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-var-lib-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770293 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770394 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-kubelet\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770479 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-ovn\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770570 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-systemd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-config\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770742 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-env-overrides\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-systemd-units\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.770942 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-etc-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.771040 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-node-log\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.771130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.771259 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-netns\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.772090 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovn-node-metrics-cert\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.772240 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.772668 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-script-lib\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.772804 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-netd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.873589 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.873847 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.873923 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.873986 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874115 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874176 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874239 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874304 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874392 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874468 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874532 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874596 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874664 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26p2s\" (UniqueName: \"kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874733 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874793 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874854 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874922 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.874984 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.875049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert\") pod \"653a41bb-bb1d-421c-a92b-7f2811d95edf\" (UID: \"653a41bb-bb1d-421c-a92b-7f2811d95edf\") " Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876042 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-node-log\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876227 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876265 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log" (OuterVolumeSpecName: "node-log") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876279 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876271 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash" (OuterVolumeSpecName: "host-slash") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876293 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876298 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876306 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876315 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876322 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876329 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876349 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876381 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket" (OuterVolumeSpecName: "log-socket") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876433 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876718 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876729 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876758 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876776 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.876748 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-node-log\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877380 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-netns\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877744 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovn-node-metrics-cert\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-script-lib\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-netd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877875 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-log-socket\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877921 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgxc8\" (UniqueName: \"kubernetes.io/projected/7b826f3a-fb9a-4cf2-a4de-c6a394001583-kube-api-access-wgxc8\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.877945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-bin\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878028 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-slash\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-var-lib-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878080 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878110 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-kubelet\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-ovn\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878166 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-systemd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878205 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-config\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878230 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-env-overrides\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 
11:47:19.878252 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-systemd-units\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-etc-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878352 4922 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878385 4922 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878399 4922 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878413 4922 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878426 4922 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878440 4922 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878453 4922 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878464 4922 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878475 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878486 4922 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878520 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-slash\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878545 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878585 4922 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-kubelet\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878602 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-systemd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-etc-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878614 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-var-lib-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878625 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-ovn-kubernetes\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878597 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-log-socket\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-ovn\") pod \"ovnkube-node-t47sv\" (UID: 
\"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878591 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-systemd-units\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878619 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-run-netns\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878636 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-bin\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878938 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-run-openvswitch\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.878994 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b826f3a-fb9a-4cf2-a4de-c6a394001583-host-cni-netd\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879020 4922 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879036 4922 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879047 4922 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/653a41bb-bb1d-421c-a92b-7f2811d95edf-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879057 4922 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879067 4922 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879077 4922 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 
11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879243 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-env-overrides\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879305 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-script-lib\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.879573 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovnkube-config\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.881265 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s" (OuterVolumeSpecName: "kube-api-access-26p2s") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "kube-api-access-26p2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.881388 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.882055 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b826f3a-fb9a-4cf2-a4de-c6a394001583-ovn-node-metrics-cert\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.888496 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "653a41bb-bb1d-421c-a92b-7f2811d95edf" (UID: "653a41bb-bb1d-421c-a92b-7f2811d95edf"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.893321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgxc8\" (UniqueName: \"kubernetes.io/projected/7b826f3a-fb9a-4cf2-a4de-c6a394001583-kube-api-access-wgxc8\") pod \"ovnkube-node-t47sv\" (UID: \"7b826f3a-fb9a-4cf2-a4de-c6a394001583\") " pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.980022 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26p2s\" (UniqueName: \"kubernetes.io/projected/653a41bb-bb1d-421c-a92b-7f2811d95edf-kube-api-access-26p2s\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.980071 4922 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/653a41bb-bb1d-421c-a92b-7f2811d95edf-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.980091 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/653a41bb-bb1d-421c-a92b-7f2811d95edf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.994229 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovnkube-controller/3.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.996711 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovn-acl-logging/0.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.997461 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg4r5_653a41bb-bb1d-421c-a92b-7f2811d95edf/ovn-controller/0.log" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.997952 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.997983 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.997994 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998004 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998013 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998022 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" exitCode=0 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 
11:47:19.998030 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" exitCode=143 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998042 4922 generic.go:334] "Generic (PLEG): container finished" podID="653a41bb-bb1d-421c-a92b-7f2811d95edf" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" exitCode=143 Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998049 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998141 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998183 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998204 4922 scope.go:117] "RemoveContainer" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998262 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998283 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998297 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998381 
4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998397 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998408 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998419 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998461 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998474 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998491 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998509 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998929 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998943 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998954 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.998996 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999008 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999019 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999030 
4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999040 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999051 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999262 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999285 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999298 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999309 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999321 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999332 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999344 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999355 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999397 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999408 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999418 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999436 
4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg4r5" event={"ID":"653a41bb-bb1d-421c-a92b-7f2811d95edf","Type":"ContainerDied","Data":"925e7e67a4fbe78eeed080fb6248ee3ea896e9f6ac32d11416ce15ab5e43d0fb"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999452 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999465 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999476 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999487 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999498 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999509 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999524 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999535 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999546 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} Feb 18 11:47:19 crc kubenswrapper[4922]: I0218 11:47:19.999556 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.001332 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/2.log" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.002089 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/1.log" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.002178 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b4595ac-c521-4ada-950d-e1b01cdff99b" containerID="7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9" exitCode=2 Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.002250 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerDied","Data":"7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9"} Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.002307 4922 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765"} Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.003131 4922 scope.go:117] "RemoveContainer" containerID="7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.003519 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c9xzd_openshift-multus(9b4595ac-c521-4ada-950d-e1b01cdff99b)\"" pod="openshift-multus/multus-c9xzd" podUID="9b4595ac-c521-4ada-950d-e1b01cdff99b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.016433 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.037644 4922 scope.go:117] "RemoveContainer" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.059154 4922 scope.go:117] "RemoveContainer" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.061484 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg4r5"] Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.068601 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg4r5"] Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.080605 4922 scope.go:117] "RemoveContainer" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.087882 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.094499 4922 scope.go:117] "RemoveContainer" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.121538 4922 scope.go:117] "RemoveContainer" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.141313 4922 scope.go:117] "RemoveContainer" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.161102 4922 scope.go:117] "RemoveContainer" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.180917 4922 scope.go:117] "RemoveContainer" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.196820 4922 scope.go:117] "RemoveContainer" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.197271 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.197347 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} err="failed to get container status \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.197415 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.197833 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": container with ID starting with 1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081 not found: ID does not exist" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.197887 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} err="failed to get container status \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": rpc error: code = NotFound desc = could not find container \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": container with ID starting with 1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.197925 4922 scope.go:117] "RemoveContainer" 
containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.198180 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": container with ID starting with e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c not found: ID does not exist" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198209 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} err="failed to get container status \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": rpc error: code = NotFound desc = could not find container \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": container with ID starting with e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198229 4922 scope.go:117] "RemoveContainer" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.198642 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": container with ID starting with 11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6 not found: ID does not exist" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198675 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} err="failed to get container status \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": rpc error: code = NotFound desc = could not find container \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": container with ID starting with 11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198694 4922 scope.go:117] "RemoveContainer" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.198932 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": container with ID starting with 90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16 not found: ID does not exist" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198964 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} err="failed to get container status \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": rpc error: code = NotFound desc = could not find container \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": container with ID starting with 
90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.198985 4922 scope.go:117] "RemoveContainer" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.199380 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": container with ID starting with eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b not found: ID does not exist" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.199412 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} err="failed to get container status \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": rpc error: code = NotFound desc = could not find container \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": container with ID starting with eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.199432 4922 scope.go:117] "RemoveContainer" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.199858 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": container with ID starting with 3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844 not found: ID does not exist" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.199892 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} err="failed to get container status \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": rpc error: code = NotFound desc = could not find container \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": container with ID starting with 3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.199914 4922 scope.go:117] "RemoveContainer" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.200126 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": container with ID starting with fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7 not found: ID does not exist" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200156 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} err="failed to get container status \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": rpc 
error: code = NotFound desc = could not find container \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": container with ID starting with fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200176 4922 scope.go:117] "RemoveContainer" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.200418 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": container with ID starting with 9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e not found: ID does not exist" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200454 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} err="failed to get container status \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": rpc error: code = NotFound desc = could not find container \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": container with ID starting with 9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200508 4922 scope.go:117] "RemoveContainer" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: E0218 11:47:20.200776 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": container with ID starting with 64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794 not found: ID does not exist" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200801 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} err="failed to get container status \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": rpc error: code = NotFound desc = could not find container \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": container with ID starting with 64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.200817 4922 scope.go:117] "RemoveContainer" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201082 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} err="failed to get container status \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 
11:47:20.201104 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201323 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} err="failed to get container status \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": rpc error: code = NotFound desc = could not find container \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": container with ID starting with 1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201408 4922 scope.go:117] "RemoveContainer" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201642 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} err="failed to get container status \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": rpc error: code = NotFound desc = could not find container \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": container with ID starting with e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201672 4922 scope.go:117] "RemoveContainer" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201883 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} err="failed to get container status \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": rpc error: code = NotFound desc = could not find container \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": container with ID starting with 11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.201907 4922 scope.go:117] "RemoveContainer" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202282 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} err="failed to get container status \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": rpc error: code = NotFound desc = could not find container \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": container with ID starting with 90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202306 4922 scope.go:117] "RemoveContainer" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202594 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} err="failed to get container status 
\"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": rpc error: code = NotFound desc = could not find container \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": container with ID starting with eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202621 4922 scope.go:117] "RemoveContainer" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202821 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} err="failed to get container status \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": rpc error: code = NotFound desc = could not find container \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": container with ID starting with 3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.202844 4922 scope.go:117] "RemoveContainer" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203060 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} err="failed to get container status \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": rpc error: code = NotFound desc = could not find container \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": container with ID starting with fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203080 4922 scope.go:117] "RemoveContainer" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203311 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} err="failed to get container status \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": rpc error: code = NotFound desc = could not find container \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": container with ID starting with 9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203336 4922 scope.go:117] "RemoveContainer" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203591 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} err="failed to get container status \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": rpc error: code = NotFound desc = could not find container \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": container with ID starting with 64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203612 4922 scope.go:117] "RemoveContainer" 
containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203843 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} err="failed to get container status \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.203869 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204171 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} err="failed to get container status \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": rpc error: code = NotFound desc = could not find container \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": container with ID starting with 1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204197 4922 scope.go:117] "RemoveContainer" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204447 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} err="failed to get container status \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": rpc error: code = NotFound desc = could not find container \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": container with ID starting with e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204472 4922 scope.go:117] "RemoveContainer" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204838 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} err="failed to get container status \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": rpc error: code = NotFound desc = could not find container \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": container with ID starting with 11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.204861 4922 scope.go:117] "RemoveContainer" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205084 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} err="failed to get container status \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": rpc error: code = NotFound desc = could not find 
container \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": container with ID starting with 90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205110 4922 scope.go:117] "RemoveContainer" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205385 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} err="failed to get container status \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": rpc error: code = NotFound desc = could not find container \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": container with ID starting with eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205417 4922 scope.go:117] "RemoveContainer" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205652 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} err="failed to get container status \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": rpc error: code = NotFound desc = could not find container \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": container with ID starting with 3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205683 4922 scope.go:117] "RemoveContainer" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205918 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} err="failed to get container status \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": rpc error: code = NotFound desc = could not find container \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": container with ID starting with fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.205941 4922 scope.go:117] "RemoveContainer" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206172 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} err="failed to get container status \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": rpc error: code = NotFound desc = could not find container \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": container with ID starting with 9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206193 4922 scope.go:117] "RemoveContainer" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206539 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} err="failed to get container status \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": rpc error: code = NotFound desc = could not find container \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": container with ID starting with 64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206560 4922 scope.go:117] "RemoveContainer" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206785 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} err="failed to get container status \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.206805 4922 scope.go:117] "RemoveContainer" containerID="1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207073 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081"} err="failed to get container status \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": rpc error: code = NotFound desc = could not find container \"1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081\": container with ID starting with 1d1aca901c6bb735f04a76a87f7a05d6c549b8f936a07fb1917ece2b9fdd0081 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207093 4922 scope.go:117] "RemoveContainer" containerID="e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207327 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c"} err="failed to get container status \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": rpc error: code = NotFound desc = could not find container \"e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c\": container with ID starting with e99aaa236f5140d0dc7d57feac05d3ed348ca3e3c21662747fc2d9fb6f6d524c not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207349 4922 scope.go:117] "RemoveContainer" containerID="11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207631 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6"} err="failed to get container status \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": rpc error: code = NotFound desc = could not find container \"11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6\": container with ID starting with 
11cc7edb64eebf72aeab109cbba4dfecbb601f3e9f585237b8e57b66896ec0f6 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207654 4922 scope.go:117] "RemoveContainer" containerID="90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207915 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16"} err="failed to get container status \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": rpc error: code = NotFound desc = could not find container \"90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16\": container with ID starting with 90386bc683dad72409b376571ec9798e81dde30ffde519bc4a97952520449c16 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.207936 4922 scope.go:117] "RemoveContainer" containerID="eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208200 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b"} err="failed to get container status \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": rpc error: code = NotFound desc = could not find container \"eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b\": container with ID starting with eec020def8c057658795c1b83804239fb6285e4bbcd3ba3a1fe05368365fca3b not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208221 4922 scope.go:117] "RemoveContainer" containerID="3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208522 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844"} err="failed to get container status \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": rpc error: code = NotFound desc = could not find container \"3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844\": container with ID starting with 3fe29d47bb107a4073f06d833a5f436aa6746bf2c47dd7c820f9ac2e4308a844 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208543 4922 scope.go:117] "RemoveContainer" containerID="fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208759 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7"} err="failed to get container status \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": rpc error: code = NotFound desc = could not find container \"fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7\": container with ID starting with fcda158beee470e32bf198a44c8cc4e4433345e4000dcefdd16252c0aa3529e7 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208782 4922 scope.go:117] "RemoveContainer" containerID="9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.208985 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e"} err="failed to get container status \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": rpc error: code = NotFound desc = could not find container \"9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e\": container with ID starting with 9cbef294a9d462e0843258405c7892152327a707bc2577f06eedfb7ecf13e20e not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.209005 4922 scope.go:117] "RemoveContainer" containerID="64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.209234 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794"} err="failed to get container status \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": rpc error: code = NotFound desc = could not find container \"64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794\": container with ID starting with 64c67cddfeb39e5545606a34a3bd8080de4a210b966a185824b2d374b880c794 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.209260 4922 scope.go:117] "RemoveContainer" containerID="d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.209495 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3"} err="failed to get container status \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": rpc error: code = NotFound desc = could not find container \"d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3\": container with ID starting with d40ac6f2baab7bfed2de70b92b02e9fd3903b69815200ff68c4655a0de9763f3 not found: ID does not exist" Feb 18 11:47:20 crc kubenswrapper[4922]: I0218 11:47:20.983452 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653a41bb-bb1d-421c-a92b-7f2811d95edf" path="/var/lib/kubelet/pods/653a41bb-bb1d-421c-a92b-7f2811d95edf/volumes" Feb 18 11:47:21 crc kubenswrapper[4922]: I0218 11:47:21.012264 4922 generic.go:334] "Generic (PLEG): container finished" podID="7b826f3a-fb9a-4cf2-a4de-c6a394001583" containerID="a87f4af54fb6c7a7e63fe8e4acaf8b08557715517bd1538e2e62c06cd4396cfd" exitCode=0 Feb 18 11:47:21 crc kubenswrapper[4922]: I0218 11:47:21.012412 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerDied","Data":"a87f4af54fb6c7a7e63fe8e4acaf8b08557715517bd1538e2e62c06cd4396cfd"} Feb 18 11:47:21 crc kubenswrapper[4922]: I0218 11:47:21.012488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"6e8d88074466e16ebb906dc20fa1fa6238859abee7bbaaee0417428d015ebce7"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033427 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"03636054b3e91332aab32a780f232662c130a45f2305252b2a7452d92b262e58"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033816 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"b2296cbb18d92c80b6c70ee9bdd4f34f0480a8c243f77be6f8dba16b4259d9ad"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033836 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"23785c92de84daadcefbcd93677f741a2911c038275116c42a870b94d15b1bb8"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033854 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"e4604063944727e2aedb2e56cde174699f16b0cd110353c8b5e9a2ffb95dd3f5"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033871 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"ecf90ff0857fef7355f76a5335271a20ba8266534519b68a2c92c1f6c2e1f8ef"} Feb 18 11:47:22 crc kubenswrapper[4922]: I0218 11:47:22.033886 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"c9ca32e51a89560fb8e225d5164d676909819dfa533aae4ff676ce9c45ce279a"} Feb 18 11:47:24 crc kubenswrapper[4922]: I0218 11:47:24.055753 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"815f6e83c156cb0a0b59b42c0c854928c6b48c9fc8e462799ce3b2f5a8442bda"} Feb 18 11:47:27 crc kubenswrapper[4922]: I0218 11:47:27.081117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" event={"ID":"7b826f3a-fb9a-4cf2-a4de-c6a394001583","Type":"ContainerStarted","Data":"b6353450a31c16913533ddf271788c3533b2b1513128991a392cbd1b434a8e8f"} Feb 18 11:47:27 crc kubenswrapper[4922]: I0218 11:47:27.081622 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:27 crc kubenswrapper[4922]: I0218 11:47:27.081648 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:27 crc kubenswrapper[4922]: I0218 11:47:27.119900 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:27 crc kubenswrapper[4922]: I0218 11:47:27.126617 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" podStartSLOduration=8.126594747 podStartE2EDuration="8.126594747s" podCreationTimestamp="2026-02-18 11:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:47:27.125156072 +0000 UTC m=+648.852860172" watchObservedRunningTime="2026-02-18 11:47:27.126594747 +0000 UTC m=+648.854298847" Feb 18 11:47:28 crc kubenswrapper[4922]: I0218 11:47:28.086643 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:28 crc kubenswrapper[4922]: I0218 11:47:28.111220 4922 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:47:33 crc kubenswrapper[4922]: I0218 11:47:33.974103 4922 scope.go:117] "RemoveContainer" containerID="7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9" Feb 18 11:47:33 crc kubenswrapper[4922]: E0218 11:47:33.974833 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-c9xzd_openshift-multus(9b4595ac-c521-4ada-950d-e1b01cdff99b)\"" pod="openshift-multus/multus-c9xzd" podUID="9b4595ac-c521-4ada-950d-e1b01cdff99b" Feb 18 11:47:39 crc kubenswrapper[4922]: I0218 11:47:39.304725 4922 scope.go:117] "RemoveContainer" containerID="71d0689c83ee6c0c1704fac1f69846ec7d6ef1b16479ee4eae33acefd6b84765" Feb 18 11:47:40 crc kubenswrapper[4922]: I0218 11:47:40.165837 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/2.log" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.609094 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp"] Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.610865 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.613581 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.620958 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp"] Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.730932 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.730988 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkrb\" (UniqueName: \"kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.731300 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.832656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.832736 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xkrb\" (UniqueName: \"kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.832808 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.833512 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.833521 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.854126 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xkrb\" (UniqueName: \"kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.958101 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:46 crc kubenswrapper[4922]: I0218 11:47:46.973043 4922 scope.go:117] "RemoveContainer" containerID="7270cd2a97c0ebd334ca9515a9ea50d1714e52da4915c1ade982166b94d940c9" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.008623 4922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(9372639c5c83c78bfb3331184249525339a100732e043822202f74150e6c666c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.008736 4922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(9372639c5c83c78bfb3331184249525339a100732e043822202f74150e6c666c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.008776 4922 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(9372639c5c83c78bfb3331184249525339a100732e043822202f74150e6c666c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.008863 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace(00938d04-ee62-4756-830e-f66e2fbaab9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace(00938d04-ee62-4756-830e-f66e2fbaab9d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(9372639c5c83c78bfb3331184249525339a100732e043822202f74150e6c666c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" Feb 18 11:47:47 crc kubenswrapper[4922]: I0218 11:47:47.210958 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c9xzd_9b4595ac-c521-4ada-950d-e1b01cdff99b/kube-multus/2.log" Feb 18 11:47:47 crc kubenswrapper[4922]: I0218 11:47:47.211103 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: I0218 11:47:47.211138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c9xzd" event={"ID":"9b4595ac-c521-4ada-950d-e1b01cdff99b","Type":"ContainerStarted","Data":"ca6c59f69a9b82d6a8d68ebcc74a0ebc9731c0ab9458f534d5fbbdb39210c498"} Feb 18 11:47:47 crc kubenswrapper[4922]: I0218 11:47:47.211726 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.237018 4922 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(4b0c1b8effdec1f0795d00cd2d87715574e9304f5c1b194966456665a100ed32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.237108 4922 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(4b0c1b8effdec1f0795d00cd2d87715574e9304f5c1b194966456665a100ed32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.237358 4922 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(4b0c1b8effdec1f0795d00cd2d87715574e9304f5c1b194966456665a100ed32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:47:47 crc kubenswrapper[4922]: E0218 11:47:47.237473 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace(00938d04-ee62-4756-830e-f66e2fbaab9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace(00938d04-ee62-4756-830e-f66e2fbaab9d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_openshift-marketplace_00938d04-ee62-4756-830e-f66e2fbaab9d_0(4b0c1b8effdec1f0795d00cd2d87715574e9304f5c1b194966456665a100ed32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" Feb 18 11:47:50 crc kubenswrapper[4922]: I0218 11:47:50.118764 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t47sv" Feb 18 11:48:00 crc kubenswrapper[4922]: I0218 11:48:00.973129 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:48:00 crc kubenswrapper[4922]: I0218 11:48:00.974160 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:48:01 crc kubenswrapper[4922]: I0218 11:48:01.237270 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp"] Feb 18 11:48:01 crc kubenswrapper[4922]: W0218 11:48:01.243547 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00938d04_ee62_4756_830e_f66e2fbaab9d.slice/crio-7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d WatchSource:0}: Error finding container 7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d: Status 404 returned error can't find the container with id 7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d Feb 18 11:48:01 crc kubenswrapper[4922]: I0218 11:48:01.300988 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" event={"ID":"00938d04-ee62-4756-830e-f66e2fbaab9d","Type":"ContainerStarted","Data":"7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d"} Feb 18 11:48:02 crc kubenswrapper[4922]: I0218 11:48:02.311077 4922 generic.go:334] "Generic (PLEG): container finished" podID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerID="27c4e9ecb5dc34c78af5e6369eb74c5f7a3dbdb2f850e0b5fc0d2b7be739625d" exitCode=0 Feb 18 11:48:02 crc kubenswrapper[4922]: I0218 11:48:02.311196 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" event={"ID":"00938d04-ee62-4756-830e-f66e2fbaab9d","Type":"ContainerDied","Data":"27c4e9ecb5dc34c78af5e6369eb74c5f7a3dbdb2f850e0b5fc0d2b7be739625d"} Feb 18 11:48:05 crc kubenswrapper[4922]: I0218 11:48:05.333549 4922 generic.go:334] "Generic (PLEG): container finished" podID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerID="274001f2e7a62c9010ffb094988d109a7d7641ec448b22bffc09d6c59f27fd5e" exitCode=0 Feb 18 11:48:05 crc kubenswrapper[4922]: I0218 11:48:05.333606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" event={"ID":"00938d04-ee62-4756-830e-f66e2fbaab9d","Type":"ContainerDied","Data":"274001f2e7a62c9010ffb094988d109a7d7641ec448b22bffc09d6c59f27fd5e"} Feb 18 11:48:06 crc kubenswrapper[4922]: I0218 11:48:06.343648 4922 generic.go:334] "Generic (PLEG): container finished" podID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerID="ff16461aa96575c79fb9785e391e62d9e7f5a2382b77e8a210b2b2e197441584" exitCode=0 Feb 18 11:48:06 crc kubenswrapper[4922]: I0218 11:48:06.343769 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" event={"ID":"00938d04-ee62-4756-830e-f66e2fbaab9d","Type":"ContainerDied","Data":"ff16461aa96575c79fb9785e391e62d9e7f5a2382b77e8a210b2b2e197441584"} Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.647738 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.747221 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util\") pod \"00938d04-ee62-4756-830e-f66e2fbaab9d\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.747282 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle\") pod \"00938d04-ee62-4756-830e-f66e2fbaab9d\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.749273 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle" (OuterVolumeSpecName: "bundle") pod "00938d04-ee62-4756-830e-f66e2fbaab9d" (UID: "00938d04-ee62-4756-830e-f66e2fbaab9d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.756826 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util" (OuterVolumeSpecName: "util") pod "00938d04-ee62-4756-830e-f66e2fbaab9d" (UID: "00938d04-ee62-4756-830e-f66e2fbaab9d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.848122 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xkrb\" (UniqueName: \"kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb\") pod \"00938d04-ee62-4756-830e-f66e2fbaab9d\" (UID: \"00938d04-ee62-4756-830e-f66e2fbaab9d\") " Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.848756 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-util\") on node \"crc\" DevicePath \"\"" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.849029 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/00938d04-ee62-4756-830e-f66e2fbaab9d-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.854451 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb" (OuterVolumeSpecName: "kube-api-access-8xkrb") pod "00938d04-ee62-4756-830e-f66e2fbaab9d" (UID: "00938d04-ee62-4756-830e-f66e2fbaab9d"). InnerVolumeSpecName "kube-api-access-8xkrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:48:07 crc kubenswrapper[4922]: I0218 11:48:07.950718 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xkrb\" (UniqueName: \"kubernetes.io/projected/00938d04-ee62-4756-830e-f66e2fbaab9d-kube-api-access-8xkrb\") on node \"crc\" DevicePath \"\"" Feb 18 11:48:08 crc kubenswrapper[4922]: I0218 11:48:08.360285 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" event={"ID":"00938d04-ee62-4756-830e-f66e2fbaab9d","Type":"ContainerDied","Data":"7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d"} Feb 18 11:48:08 crc kubenswrapper[4922]: I0218 11:48:08.360710 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c71d4c0c8bc51a41b0b55cdd5a469e8c57de802ea041e37234606852613315d" Feb 18 11:48:08 crc kubenswrapper[4922]: I0218 11:48:08.360323 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.748005 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p"] Feb 18 11:48:19 crc kubenswrapper[4922]: E0218 11:48:19.748614 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="extract" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.748626 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="extract" Feb 18 11:48:19 crc kubenswrapper[4922]: E0218 11:48:19.748646 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="util" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.748652 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="util" Feb 18 11:48:19 crc kubenswrapper[4922]: E0218 11:48:19.748661 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="pull" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.748667 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="pull" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.748753 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="00938d04-ee62-4756-830e-f66e2fbaab9d" containerName="extract" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.749073 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.750909 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.752123 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-57c68" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.754275 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.788475 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p"] Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.895221 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p"] Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.895842 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.898068 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-lpql4" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.898294 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.898291 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.898436 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.898478 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmph9\" (UniqueName: \"kubernetes.io/projected/1446ef26-f977-4255-a1b2-a42e8107303e-kube-api-access-pmph9\") pod \"obo-prometheus-operator-68bc856cb9-cq76p\" (UID: \"1446ef26-f977-4255-a1b2-a42e8107303e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.910065 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p"] Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.911725 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5"] Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.912344 4922 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:19 crc kubenswrapper[4922]: I0218 11:48:19.937159 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.000516 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.000582 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.000629 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.000657 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmph9\" (UniqueName: \"kubernetes.io/projected/1446ef26-f977-4255-a1b2-a42e8107303e-kube-api-access-pmph9\") pod \"obo-prometheus-operator-68bc856cb9-cq76p\" (UID: \"1446ef26-f977-4255-a1b2-a42e8107303e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.000680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.005973 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.010186 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/333644cd-a424-47a3-b701-378149dcdc80-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p\" (UID: \"333644cd-a424-47a3-b701-378149dcdc80\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:20 crc 
kubenswrapper[4922]: I0218 11:48:20.023249 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmph9\" (UniqueName: \"kubernetes.io/projected/1446ef26-f977-4255-a1b2-a42e8107303e-kube-api-access-pmph9\") pod \"obo-prometheus-operator-68bc856cb9-cq76p\" (UID: \"1446ef26-f977-4255-a1b2-a42e8107303e\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.064484 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.083136 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tkz2d"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.092243 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.097641 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-5m464" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.099346 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.102343 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c40ab6-7b55-410d-958e-3a6a37818c88-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.102603 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.102707 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vv9\" (UniqueName: \"kubernetes.io/projected/10c40ab6-7b55-410d-958e-3a6a37818c88-kube-api-access-x4vv9\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.102845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.108258 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tkz2d"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.109901 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.120326 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/879d4ddb-47d1-4987-a980-e9f05104e5cb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5\" (UID: \"879d4ddb-47d1-4987-a980-e9f05104e5cb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.174099 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mh85w"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.174977 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.180840 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-q4j4d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.187111 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mh85w"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.203275 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.203328 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c40ab6-7b55-410d-958e-3a6a37818c88-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.203376 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vv9\" (UniqueName: \"kubernetes.io/projected/10c40ab6-7b55-410d-958e-3a6a37818c88-kube-api-access-x4vv9\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.203418 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49t42\" (UniqueName: \"kubernetes.io/projected/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-kube-api-access-49t42\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.211501 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.223104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c40ab6-7b55-410d-958e-3a6a37818c88-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.232580 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.248454 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vv9\" (UniqueName: \"kubernetes.io/projected/10c40ab6-7b55-410d-958e-3a6a37818c88-kube-api-access-x4vv9\") pod \"observability-operator-59bdc8b94-tkz2d\" (UID: \"10c40ab6-7b55-410d-958e-3a6a37818c88\") " pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.304650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.304777 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49t42\" (UniqueName: \"kubernetes.io/projected/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-kube-api-access-49t42\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.306434 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.337799 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49t42\" (UniqueName: \"kubernetes.io/projected/20e893d8-cc0c-4bdf-83d6-698e08e5d82b-kube-api-access-49t42\") pod \"perses-operator-5bf474d74f-mh85w\" (UID: \"20e893d8-cc0c-4bdf-83d6-698e08e5d82b\") " pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.337907 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.423093 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" event={"ID":"1446ef26-f977-4255-a1b2-a42e8107303e","Type":"ContainerStarted","Data":"51ec3eea3e9dbf3f81fde0c2533d89180c44631d8326fa4d3f7424ccd6c8208b"} Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.439703 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.500466 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p"] Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.509810 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.655244 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5"] Feb 18 11:48:20 crc kubenswrapper[4922]: W0218 11:48:20.660559 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879d4ddb_47d1_4987_a980_e9f05104e5cb.slice/crio-d130111a8c10d96bda6aee9ebd583cdfe68bf855db61549867de268407fb946d WatchSource:0}: Error finding container d130111a8c10d96bda6aee9ebd583cdfe68bf855db61549867de268407fb946d: Status 404 returned error can't find the container with id d130111a8c10d96bda6aee9ebd583cdfe68bf855db61549867de268407fb946d Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.746686 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-mh85w"] Feb 18 11:48:20 crc kubenswrapper[4922]: W0218 11:48:20.749568 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20e893d8_cc0c_4bdf_83d6_698e08e5d82b.slice/crio-d6ecd55219bc848d998996159031bc8f52c458e20914bd6cc68c7fd8ea617729 WatchSource:0}: Error finding container d6ecd55219bc848d998996159031bc8f52c458e20914bd6cc68c7fd8ea617729: Status 404 returned error can't find the container with id d6ecd55219bc848d998996159031bc8f52c458e20914bd6cc68c7fd8ea617729 Feb 18 11:48:20 crc kubenswrapper[4922]: I0218 11:48:20.839350 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tkz2d"] Feb 18 11:48:21 crc kubenswrapper[4922]: I0218 11:48:21.434279 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" event={"ID":"333644cd-a424-47a3-b701-378149dcdc80","Type":"ContainerStarted","Data":"bfb3f5af5fdd25aea6b8e43188f03e9bd226ff5f52c46d835df6d64951891b75"} Feb 18 11:48:21 crc kubenswrapper[4922]: I0218 11:48:21.435633 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" event={"ID":"20e893d8-cc0c-4bdf-83d6-698e08e5d82b","Type":"ContainerStarted","Data":"d6ecd55219bc848d998996159031bc8f52c458e20914bd6cc68c7fd8ea617729"} Feb 18 11:48:21 crc kubenswrapper[4922]: I0218 11:48:21.437178 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" event={"ID":"879d4ddb-47d1-4987-a980-e9f05104e5cb","Type":"ContainerStarted","Data":"d130111a8c10d96bda6aee9ebd583cdfe68bf855db61549867de268407fb946d"} Feb 18 11:48:21 crc kubenswrapper[4922]: I0218 11:48:21.442280 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" event={"ID":"10c40ab6-7b55-410d-958e-3a6a37818c88","Type":"ContainerStarted","Data":"5e42eb36343f27623db83714142fb72f73f55eda27d47a6db8b06cc55d0dc155"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 
11:48:30.522168 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" event={"ID":"333644cd-a424-47a3-b701-378149dcdc80","Type":"ContainerStarted","Data":"2d68b0fa4fed319e58c3f11397dc751ebec3ef25550c7e39c68d1baf4f73f03c"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.524315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" event={"ID":"20e893d8-cc0c-4bdf-83d6-698e08e5d82b","Type":"ContainerStarted","Data":"6fbb52c7299b7f81e9e1da049175b542d94caa65f52eb81e4012cbc3f3e8cbba"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.524502 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.526467 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" event={"ID":"879d4ddb-47d1-4987-a980-e9f05104e5cb","Type":"ContainerStarted","Data":"ca02b621f7015a194f2ab4dd72d7e49a730566bb65fd0f968d756e46dc231160"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.528078 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" event={"ID":"10c40ab6-7b55-410d-958e-3a6a37818c88","Type":"ContainerStarted","Data":"1f8cc572431c4a7cc6dab26aab6aa0ca6ae6f2757a46873bea2e63ad26b51048"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.528318 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.530195 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" event={"ID":"1446ef26-f977-4255-a1b2-a42e8107303e","Type":"ContainerStarted","Data":"7d3220e5856f5b48eb51b0bfa964b73fdd7d74ebe7a99d5d79c290c9ebf9891d"} Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.530556 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.543883 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p" podStartSLOduration=2.863131878 podStartE2EDuration="11.543859028s" podCreationTimestamp="2026-02-18 11:48:19 +0000 UTC" firstStartedPulling="2026-02-18 11:48:20.529917167 +0000 UTC m=+702.257621247" lastFinishedPulling="2026-02-18 11:48:29.210644317 +0000 UTC m=+710.938348397" observedRunningTime="2026-02-18 11:48:30.538187867 +0000 UTC m=+712.265891947" watchObservedRunningTime="2026-02-18 11:48:30.543859028 +0000 UTC m=+712.271563108" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.562806 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5" podStartSLOduration=2.978936367 podStartE2EDuration="11.56278742s" podCreationTimestamp="2026-02-18 11:48:19 +0000 UTC" firstStartedPulling="2026-02-18 11:48:20.66311793 +0000 UTC m=+702.390822010" lastFinishedPulling="2026-02-18 11:48:29.246968983 +0000 UTC m=+710.974673063" observedRunningTime="2026-02-18 11:48:30.559915249 +0000 UTC m=+712.287619329" watchObservedRunningTime="2026-02-18 11:48:30.56278742 +0000 UTC 
m=+712.290491500" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.596160 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cq76p" podStartSLOduration=2.7057864929999997 podStartE2EDuration="11.596138562s" podCreationTimestamp="2026-02-18 11:48:19 +0000 UTC" firstStartedPulling="2026-02-18 11:48:20.356960772 +0000 UTC m=+702.084664852" lastFinishedPulling="2026-02-18 11:48:29.247312841 +0000 UTC m=+710.975016921" observedRunningTime="2026-02-18 11:48:30.59405189 +0000 UTC m=+712.321755970" watchObservedRunningTime="2026-02-18 11:48:30.596138562 +0000 UTC m=+712.323842642" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.612953 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tkz2d" podStartSLOduration=2.1566347710000002 podStartE2EDuration="10.612930871s" podCreationTimestamp="2026-02-18 11:48:20 +0000 UTC" firstStartedPulling="2026-02-18 11:48:20.844938757 +0000 UTC m=+702.572642837" lastFinishedPulling="2026-02-18 11:48:29.301234857 +0000 UTC m=+711.028938937" observedRunningTime="2026-02-18 11:48:30.609996108 +0000 UTC m=+712.337700198" watchObservedRunningTime="2026-02-18 11:48:30.612930871 +0000 UTC m=+712.340634971" Feb 18 11:48:30 crc kubenswrapper[4922]: I0218 11:48:30.634828 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" podStartSLOduration=2.140907029 podStartE2EDuration="10.634812437s" podCreationTimestamp="2026-02-18 11:48:20 +0000 UTC" firstStartedPulling="2026-02-18 11:48:20.751868165 +0000 UTC m=+702.479572245" lastFinishedPulling="2026-02-18 11:48:29.245773573 +0000 UTC m=+710.973477653" observedRunningTime="2026-02-18 11:48:30.634599722 +0000 UTC m=+712.362303802" watchObservedRunningTime="2026-02-18 11:48:30.634812437 +0000 UTC m=+712.362516517" Feb 18 11:48:40 crc kubenswrapper[4922]: I0218 11:48:40.512771 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-mh85w" Feb 18 11:48:56 crc kubenswrapper[4922]: I0218 11:48:56.913680 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846"] Feb 18 11:48:56 crc kubenswrapper[4922]: I0218 11:48:56.915403 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:56 crc kubenswrapper[4922]: I0218 11:48:56.917524 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 11:48:56 crc kubenswrapper[4922]: I0218 11:48:56.922726 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846"] Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.089372 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.089743 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.089833 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxm87\" (UniqueName: \"kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.191629 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.191710 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxm87\" (UniqueName: \"kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.191762 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.192184 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.192275 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.218935 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxm87\" (UniqueName: \"kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.231153 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:48:57 crc kubenswrapper[4922]: I0218 11:48:57.679342 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846"] Feb 18 11:48:58 crc kubenswrapper[4922]: I0218 11:48:58.690297 4922 generic.go:334] "Generic (PLEG): container finished" podID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerID="f317ae4fd0a8f319d576033f5e5722ebb6e82ba20cf05423570e941ca3017274" exitCode=0 Feb 18 11:48:58 crc kubenswrapper[4922]: I0218 11:48:58.690342 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" event={"ID":"b19cf8eb-c4e0-42a2-bc33-246e5c756bda","Type":"ContainerDied","Data":"f317ae4fd0a8f319d576033f5e5722ebb6e82ba20cf05423570e941ca3017274"} Feb 18 11:48:58 crc kubenswrapper[4922]: I0218 11:48:58.690405 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" event={"ID":"b19cf8eb-c4e0-42a2-bc33-246e5c756bda","Type":"ContainerStarted","Data":"0b2941e027dac645858be77cdc474ee5835f582540fe8a51fabd3794a74a1b37"} Feb 18 11:49:00 crc kubenswrapper[4922]: I0218 11:49:00.701943 4922 generic.go:334] "Generic (PLEG): container finished" podID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerID="fee21b33036842e34387a977a9de1d244aa81b9728e3adff4a2c5ef581c836c7" exitCode=0 Feb 18 11:49:00 crc kubenswrapper[4922]: I0218 11:49:00.702020 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" event={"ID":"b19cf8eb-c4e0-42a2-bc33-246e5c756bda","Type":"ContainerDied","Data":"fee21b33036842e34387a977a9de1d244aa81b9728e3adff4a2c5ef581c836c7"} Feb 18 11:49:01 crc kubenswrapper[4922]: I0218 11:49:01.714596 4922 generic.go:334] "Generic (PLEG): container finished" podID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerID="a06c5179c5c3ffcaaf7732a92ab878d68e7d526acdd98925fa5cbaf37828b776" exitCode=0 Feb 18 11:49:01 crc kubenswrapper[4922]: I0218 
11:49:01.714634 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" event={"ID":"b19cf8eb-c4e0-42a2-bc33-246e5c756bda","Type":"ContainerDied","Data":"a06c5179c5c3ffcaaf7732a92ab878d68e7d526acdd98925fa5cbaf37828b776"} Feb 18 11:49:02 crc kubenswrapper[4922]: I0218 11:49:02.932957 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.074521 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util\") pod \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.074962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle\") pod \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.075052 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxm87\" (UniqueName: \"kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87\") pod \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\" (UID: \"b19cf8eb-c4e0-42a2-bc33-246e5c756bda\") " Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.075979 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle" (OuterVolumeSpecName: "bundle") pod "b19cf8eb-c4e0-42a2-bc33-246e5c756bda" (UID: "b19cf8eb-c4e0-42a2-bc33-246e5c756bda"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.076623 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.084705 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87" (OuterVolumeSpecName: "kube-api-access-rxm87") pod "b19cf8eb-c4e0-42a2-bc33-246e5c756bda" (UID: "b19cf8eb-c4e0-42a2-bc33-246e5c756bda"). InnerVolumeSpecName "kube-api-access-rxm87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.099602 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util" (OuterVolumeSpecName: "util") pod "b19cf8eb-c4e0-42a2-bc33-246e5c756bda" (UID: "b19cf8eb-c4e0-42a2-bc33-246e5c756bda"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.177748 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-util\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.177786 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxm87\" (UniqueName: \"kubernetes.io/projected/b19cf8eb-c4e0-42a2-bc33-246e5c756bda-kube-api-access-rxm87\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.729631 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" event={"ID":"b19cf8eb-c4e0-42a2-bc33-246e5c756bda","Type":"ContainerDied","Data":"0b2941e027dac645858be77cdc474ee5835f582540fe8a51fabd3794a74a1b37"} Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.729689 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b2941e027dac645858be77cdc474ee5835f582540fe8a51fabd3794a74a1b37" Feb 18 11:49:03 crc kubenswrapper[4922]: I0218 11:49:03.729723 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.406994 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7vsx"] Feb 18 11:49:05 crc kubenswrapper[4922]: E0218 11:49:05.407278 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="extract" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.407293 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="extract" Feb 18 11:49:05 crc kubenswrapper[4922]: E0218 11:49:05.407306 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="pull" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.407313 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="pull" Feb 18 11:49:05 crc kubenswrapper[4922]: E0218 11:49:05.407324 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="util" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.407332 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="util" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.407476 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19cf8eb-c4e0-42a2-bc33-246e5c756bda" containerName="extract" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.408017 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.411492 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.411757 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tv7x2" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.413593 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.415792 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7vsx"] Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.606136 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4dv\" (UniqueName: \"kubernetes.io/projected/578f51b2-8e78-4720-93f6-7cd9ce17e2ed-kube-api-access-ct4dv\") pod \"nmstate-operator-694c9596b7-p7vsx\" (UID: \"578f51b2-8e78-4720-93f6-7cd9ce17e2ed\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.707432 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4dv\" (UniqueName: \"kubernetes.io/projected/578f51b2-8e78-4720-93f6-7cd9ce17e2ed-kube-api-access-ct4dv\") pod \"nmstate-operator-694c9596b7-p7vsx\" (UID: \"578f51b2-8e78-4720-93f6-7cd9ce17e2ed\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.725078 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4dv\" (UniqueName: \"kubernetes.io/projected/578f51b2-8e78-4720-93f6-7cd9ce17e2ed-kube-api-access-ct4dv\") pod \"nmstate-operator-694c9596b7-p7vsx\" (UID: \"578f51b2-8e78-4720-93f6-7cd9ce17e2ed\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.731663 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" Feb 18 11:49:05 crc kubenswrapper[4922]: I0218 11:49:05.955067 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7vsx"] Feb 18 11:49:06 crc kubenswrapper[4922]: I0218 11:49:06.748612 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" event={"ID":"578f51b2-8e78-4720-93f6-7cd9ce17e2ed","Type":"ContainerStarted","Data":"23fb181ec1e27b020e0dfd9aecccf9eb4b208f7f089ba0e0341f2e3be0aae32d"} Feb 18 11:49:08 crc kubenswrapper[4922]: I0218 11:49:08.760213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" event={"ID":"578f51b2-8e78-4720-93f6-7cd9ce17e2ed","Type":"ContainerStarted","Data":"03ac8600b2ed13b54fdddd204431cde0ea9aa0540af33fc438a688c0a14c3d5e"} Feb 18 11:49:08 crc kubenswrapper[4922]: I0218 11:49:08.777923 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7vsx" podStartSLOduration=1.555292347 podStartE2EDuration="3.777905069s" podCreationTimestamp="2026-02-18 11:49:05 +0000 UTC" firstStartedPulling="2026-02-18 11:49:05.966683509 +0000 UTC m=+747.694387579" lastFinishedPulling="2026-02-18 11:49:08.189296221 +0000 UTC m=+749.917000301" observedRunningTime="2026-02-18 11:49:08.773919281 +0000 UTC m=+750.501623381" watchObservedRunningTime="2026-02-18 11:49:08.777905069 +0000 UTC m=+750.505609149" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.674350 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2dtql"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.675789 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.679257 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5nqz7" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.700355 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2dtql"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.715678 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.716483 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.718183 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.738897 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xgmj2"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.739706 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.745014 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.826919 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.827798 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.831036 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.831378 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-9rxjj" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.840534 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x"] Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.840561 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855255 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df5bbc9b-9ba2-416b-93db-c4f6155b6906-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855305 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4nn\" (UniqueName: \"kubernetes.io/projected/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-kube-api-access-nm4nn\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855353 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tlg\" (UniqueName: \"kubernetes.io/projected/4e3e71a0-5178-4016-853d-0d0c31563d99-kube-api-access-n6tlg\") pod \"nmstate-metrics-58c85c668d-2dtql\" (UID: \"4e3e71a0-5178-4016-853d-0d0c31563d99\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855501 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-ovs-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855616 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs46n\" (UniqueName: \"kubernetes.io/projected/df5bbc9b-9ba2-416b-93db-c4f6155b6906-kube-api-access-hs46n\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855650 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855677 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-nmstate-lock\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxn9\" (UniqueName: \"kubernetes.io/projected/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-kube-api-access-6kxn9\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.855896 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-dbus-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956407 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956461 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs46n\" (UniqueName: \"kubernetes.io/projected/df5bbc9b-9ba2-416b-93db-c4f6155b6906-kube-api-access-hs46n\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956499 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956530 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-nmstate-lock\") pod \"nmstate-handler-xgmj2\" (UID: 
\"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956557 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kxn9\" (UniqueName: \"kubernetes.io/projected/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-kube-api-access-6kxn9\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956584 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-dbus-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956609 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df5bbc9b-9ba2-416b-93db-c4f6155b6906-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956633 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4nn\" (UniqueName: \"kubernetes.io/projected/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-kube-api-access-nm4nn\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956684 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-nmstate-lock\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tlg\" (UniqueName: \"kubernetes.io/projected/4e3e71a0-5178-4016-853d-0d0c31563d99-kube-api-access-n6tlg\") pod \"nmstate-metrics-58c85c668d-2dtql\" (UID: \"4e3e71a0-5178-4016-853d-0d0c31563d99\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-ovs-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.957027 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-ovs-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.956915 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-dbus-socket\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " 
pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.958346 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.968565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/df5bbc9b-9ba2-416b-93db-c4f6155b6906-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.970039 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.975052 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kxn9\" (UniqueName: \"kubernetes.io/projected/ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01-kube-api-access-6kxn9\") pod \"nmstate-handler-xgmj2\" (UID: \"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01\") " pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.978151 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4nn\" (UniqueName: \"kubernetes.io/projected/8a41aeaf-5b15-4c8c-8abc-ad77b8e33896-kube-api-access-nm4nn\") pod \"nmstate-console-plugin-5c78fc5d65-8x52x\" (UID: \"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.981505 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tlg\" (UniqueName: \"kubernetes.io/projected/4e3e71a0-5178-4016-853d-0d0c31563d99-kube-api-access-n6tlg\") pod \"nmstate-metrics-58c85c668d-2dtql\" (UID: \"4e3e71a0-5178-4016-853d-0d0c31563d99\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.991196 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" Feb 18 11:49:09 crc kubenswrapper[4922]: I0218 11:49:09.992866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs46n\" (UniqueName: \"kubernetes.io/projected/df5bbc9b-9ba2-416b-93db-c4f6155b6906-kube-api-access-hs46n\") pod \"nmstate-webhook-866bcb46dc-7mvdv\" (UID: \"df5bbc9b-9ba2-416b-93db-c4f6155b6906\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.033307 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54f4fcfcbd-86swd"] Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.039841 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.041622 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.046940 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54f4fcfcbd-86swd"] Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058723 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-trusted-ca-bundle\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058780 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-oauth-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058832 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-console-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058874 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-service-ca\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058891 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-oauth-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.058911 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h89d\" (UniqueName: \"kubernetes.io/projected/73c837c9-e56e-4076-a7ed-1093dc99787c-kube-api-access-5h89d\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.060263 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:10 crc kubenswrapper[4922]: W0218 11:49:10.118977 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea43ed5b_6735_4fd5_8fc5_1a01dcaeea01.slice/crio-b4c8ed4a8968f2756d55c4a919a88ce813ef651f9dc338bdd94d980e199a8760 WatchSource:0}: Error finding container b4c8ed4a8968f2756d55c4a919a88ce813ef651f9dc338bdd94d980e199a8760: Status 404 returned error can't find the container with id b4c8ed4a8968f2756d55c4a919a88ce813ef651f9dc338bdd94d980e199a8760 Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.143090 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.159919 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-trusted-ca-bundle\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.159977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-oauth-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.160010 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.160028 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-console-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.160047 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-service-ca\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.160064 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-oauth-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.160086 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h89d\" (UniqueName: \"kubernetes.io/projected/73c837c9-e56e-4076-a7ed-1093dc99787c-kube-api-access-5h89d\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " 
pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.161407 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-console-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.161519 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-service-ca\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.161646 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-oauth-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.162457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73c837c9-e56e-4076-a7ed-1093dc99787c-trusted-ca-bundle\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.170405 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-serving-cert\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.170877 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/73c837c9-e56e-4076-a7ed-1093dc99787c-console-oauth-config\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.183301 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h89d\" (UniqueName: \"kubernetes.io/projected/73c837c9-e56e-4076-a7ed-1093dc99787c-kube-api-access-5h89d\") pod \"console-54f4fcfcbd-86swd\" (UID: \"73c837c9-e56e-4076-a7ed-1093dc99787c\") " pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.242773 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-2dtql"] Feb 18 11:49:10 crc kubenswrapper[4922]: W0218 11:49:10.261190 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e3e71a0_5178_4016_853d_0d0c31563d99.slice/crio-9a438a87856d64e8c6c0602eab77806f09019b8b4c949a60a50d07507c8afc5d WatchSource:0}: Error finding container 9a438a87856d64e8c6c0602eab77806f09019b8b4c949a60a50d07507c8afc5d: Status 404 returned error can't find the container with id 9a438a87856d64e8c6c0602eab77806f09019b8b4c949a60a50d07507c8afc5d Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.304741 4922 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv"] Feb 18 11:49:10 crc kubenswrapper[4922]: W0218 11:49:10.312171 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf5bbc9b_9ba2_416b_93db_c4f6155b6906.slice/crio-6552cf11b2ffba1a085f7727712027bd5961c7c9a1779c6422085715b1b5b708 WatchSource:0}: Error finding container 6552cf11b2ffba1a085f7727712027bd5961c7c9a1779c6422085715b1b5b708: Status 404 returned error can't find the container with id 6552cf11b2ffba1a085f7727712027bd5961c7c9a1779c6422085715b1b5b708 Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.387814 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x"] Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.392121 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.588765 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54f4fcfcbd-86swd"] Feb 18 11:49:10 crc kubenswrapper[4922]: W0218 11:49:10.593028 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73c837c9_e56e_4076_a7ed_1093dc99787c.slice/crio-2b3f8430eb5e0a2b5aeb43f5591d8a9adaf0170d80f47e516348a127b8d0199e WatchSource:0}: Error finding container 2b3f8430eb5e0a2b5aeb43f5591d8a9adaf0170d80f47e516348a127b8d0199e: Status 404 returned error can't find the container with id 2b3f8430eb5e0a2b5aeb43f5591d8a9adaf0170d80f47e516348a127b8d0199e Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.774983 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" event={"ID":"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896","Type":"ContainerStarted","Data":"c1c0aad688c46b210d97c6b4744ee5db3393cebc6a77d3cccdce985ebeb53f97"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.776333 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xgmj2" event={"ID":"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01","Type":"ContainerStarted","Data":"b4c8ed4a8968f2756d55c4a919a88ce813ef651f9dc338bdd94d980e199a8760"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.777847 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54f4fcfcbd-86swd" event={"ID":"73c837c9-e56e-4076-a7ed-1093dc99787c","Type":"ContainerStarted","Data":"46fea00450caf74b5dab5b448ff200117f02a85055a02939633fd4ddd3f537ed"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.777879 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54f4fcfcbd-86swd" event={"ID":"73c837c9-e56e-4076-a7ed-1093dc99787c","Type":"ContainerStarted","Data":"2b3f8430eb5e0a2b5aeb43f5591d8a9adaf0170d80f47e516348a127b8d0199e"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.779219 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" event={"ID":"df5bbc9b-9ba2-416b-93db-c4f6155b6906","Type":"ContainerStarted","Data":"6552cf11b2ffba1a085f7727712027bd5961c7c9a1779c6422085715b1b5b708"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.780240 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" 
event={"ID":"4e3e71a0-5178-4016-853d-0d0c31563d99","Type":"ContainerStarted","Data":"9a438a87856d64e8c6c0602eab77806f09019b8b4c949a60a50d07507c8afc5d"} Feb 18 11:49:10 crc kubenswrapper[4922]: I0218 11:49:10.796590 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54f4fcfcbd-86swd" podStartSLOduration=0.796569354 podStartE2EDuration="796.569354ms" podCreationTimestamp="2026-02-18 11:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:49:10.794667417 +0000 UTC m=+752.522371507" watchObservedRunningTime="2026-02-18 11:49:10.796569354 +0000 UTC m=+752.524273444" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.354195 4922 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.799850 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" event={"ID":"df5bbc9b-9ba2-416b-93db-c4f6155b6906","Type":"ContainerStarted","Data":"8babf6eb9f2388affc816411dfa34c11f1586951e8dfefd647f4b350b7c6b593"} Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.800471 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.801433 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" event={"ID":"4e3e71a0-5178-4016-853d-0d0c31563d99","Type":"ContainerStarted","Data":"82c4c1563907ae81ba22c301e70a0195c6a4f5cd83d16f2023b7ea1f11569fc1"} Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.802725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" event={"ID":"8a41aeaf-5b15-4c8c-8abc-ad77b8e33896","Type":"ContainerStarted","Data":"a84288c98e8ee4e1135b7c61012ecc0c85db6ff330a82a1c0f7cb1e2e0dc6979"} Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.805186 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xgmj2" event={"ID":"ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01","Type":"ContainerStarted","Data":"0e1a45db1b17d91496fe6ad090b385b45be5ad488f0b95f705b7ad4302b59e24"} Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.805375 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.820116 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" podStartSLOduration=2.095602069 podStartE2EDuration="4.820092229s" podCreationTimestamp="2026-02-18 11:49:09 +0000 UTC" firstStartedPulling="2026-02-18 11:49:10.318942442 +0000 UTC m=+752.046646522" lastFinishedPulling="2026-02-18 11:49:13.043432572 +0000 UTC m=+754.771136682" observedRunningTime="2026-02-18 11:49:13.814866461 +0000 UTC m=+755.542570531" watchObservedRunningTime="2026-02-18 11:49:13.820092229 +0000 UTC m=+755.547796309" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.834778 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xgmj2" podStartSLOduration=1.9015293610000001 podStartE2EDuration="4.834763409s" podCreationTimestamp="2026-02-18 11:49:09 +0000 UTC" 
firstStartedPulling="2026-02-18 11:49:10.121643095 +0000 UTC m=+751.849347175" lastFinishedPulling="2026-02-18 11:49:13.054877113 +0000 UTC m=+754.782581223" observedRunningTime="2026-02-18 11:49:13.833700453 +0000 UTC m=+755.561404533" watchObservedRunningTime="2026-02-18 11:49:13.834763409 +0000 UTC m=+755.562467489" Feb 18 11:49:13 crc kubenswrapper[4922]: I0218 11:49:13.852234 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8x52x" podStartSLOduration=2.211911226 podStartE2EDuration="4.852207778s" podCreationTimestamp="2026-02-18 11:49:09 +0000 UTC" firstStartedPulling="2026-02-18 11:49:10.394016266 +0000 UTC m=+752.121720346" lastFinishedPulling="2026-02-18 11:49:13.034312818 +0000 UTC m=+754.762016898" observedRunningTime="2026-02-18 11:49:13.848930467 +0000 UTC m=+755.576634567" watchObservedRunningTime="2026-02-18 11:49:13.852207778 +0000 UTC m=+755.579911858" Feb 18 11:49:15 crc kubenswrapper[4922]: I0218 11:49:15.818199 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" event={"ID":"4e3e71a0-5178-4016-853d-0d0c31563d99","Type":"ContainerStarted","Data":"fb7e680683328a66fed5134d370553e5315a803ccf7e3bdef8fd47bb90ac1508"} Feb 18 11:49:15 crc kubenswrapper[4922]: I0218 11:49:15.839909 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-2dtql" podStartSLOduration=1.905352466 podStartE2EDuration="6.83988528s" podCreationTimestamp="2026-02-18 11:49:09 +0000 UTC" firstStartedPulling="2026-02-18 11:49:10.262083585 +0000 UTC m=+751.989787665" lastFinishedPulling="2026-02-18 11:49:15.196616399 +0000 UTC m=+756.924320479" observedRunningTime="2026-02-18 11:49:15.83663062 +0000 UTC m=+757.564334790" watchObservedRunningTime="2026-02-18 11:49:15.83988528 +0000 UTC m=+757.567589390" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.098282 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xgmj2" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.393180 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.393259 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.397771 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.855064 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54f4fcfcbd-86swd" Feb 18 11:49:20 crc kubenswrapper[4922]: I0218 11:49:20.904834 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:49:30 crc kubenswrapper[4922]: I0218 11:49:30.046930 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7mvdv" Feb 18 11:49:39 crc kubenswrapper[4922]: I0218 11:49:39.807448 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 11:49:39 crc kubenswrapper[4922]: I0218 11:49:39.808226 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.562598 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4"] Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.564093 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.566306 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.579602 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4"] Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.631573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8llc\" (UniqueName: \"kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.631644 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.631669 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.732606 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8llc\" (UniqueName: \"kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.732688 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.732727 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.733356 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.733743 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.753411 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8llc\" (UniqueName: \"kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:44 crc kubenswrapper[4922]: I0218 11:49:44.881977 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:45 crc kubenswrapper[4922]: I0218 11:49:45.330609 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4"] Feb 18 11:49:45 crc kubenswrapper[4922]: I0218 11:49:45.946630 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nfn89" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" containerID="cri-o://af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194" gracePeriod=15 Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.042681 4922 generic.go:334] "Generic (PLEG): container finished" podID="188679bc-8b67-4136-94ce-fa515c1c950a" containerID="a0d393bb91b2595d47ee36d33109599b029b2e17f22ea41b4c78c4daf107351d" exitCode=0 Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.042741 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" event={"ID":"188679bc-8b67-4136-94ce-fa515c1c950a","Type":"ContainerDied","Data":"a0d393bb91b2595d47ee36d33109599b029b2e17f22ea41b4c78c4daf107351d"} Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.043055 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" event={"ID":"188679bc-8b67-4136-94ce-fa515c1c950a","Type":"ContainerStarted","Data":"338a8537317d0f5f601de5b9d14a10031bd912511e91d08e59143447f9685b68"} Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.289743 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.291058 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.296289 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.355710 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4485\" (UniqueName: \"kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.355872 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.356082 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.416095 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nfn89_14e81dbf-6c73-481c-b758-4c15cc0f3258/console/0.log" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.416162 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.457885 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slf52\" (UniqueName: \"kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.457945 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.457978 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.457999 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458040 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458054 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458070 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle\") pod \"14e81dbf-6c73-481c-b758-4c15cc0f3258\" (UID: \"14e81dbf-6c73-481c-b758-4c15cc0f3258\") " Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458326 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458394 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4485\" (UniqueName: \"kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.458427 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.459136 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config" (OuterVolumeSpecName: "console-config") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.459377 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.459660 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.459914 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.460078 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca" (OuterVolumeSpecName: "service-ca") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.460461 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.463564 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.465119 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52" (OuterVolumeSpecName: "kube-api-access-slf52") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "kube-api-access-slf52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.467671 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "14e81dbf-6c73-481c-b758-4c15cc0f3258" (UID: "14e81dbf-6c73-481c-b758-4c15cc0f3258"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.475427 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4485\" (UniqueName: \"kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485\") pod \"redhat-operators-8cjdv\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559376 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slf52\" (UniqueName: \"kubernetes.io/projected/14e81dbf-6c73-481c-b758-4c15cc0f3258-kube-api-access-slf52\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559429 4922 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559439 4922 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559450 4922 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559462 4922 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559472 4922 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/14e81dbf-6c73-481c-b758-4c15cc0f3258-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.559482 4922 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e81dbf-6c73-481c-b758-4c15cc0f3258-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.614700 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:46 crc kubenswrapper[4922]: I0218 11:49:46.818043 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:49:46 crc kubenswrapper[4922]: W0218 11:49:46.825323 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0573a6f7_8a5e_4083_8dc6_64608707229c.slice/crio-87d76ad8e6106ce18111899a657b4a0c77b34e29c2e101027714b79ae306611e WatchSource:0}: Error finding container 87d76ad8e6106ce18111899a657b4a0c77b34e29c2e101027714b79ae306611e: Status 404 returned error can't find the container with id 87d76ad8e6106ce18111899a657b4a0c77b34e29c2e101027714b79ae306611e Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.050190 4922 generic.go:334] "Generic (PLEG): container finished" podID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerID="518cfc0e2122ba3d8575366d8ca43a4359d78ae39ac7920bc04d33ac9f1a06a7" exitCode=0 Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.051257 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerDied","Data":"518cfc0e2122ba3d8575366d8ca43a4359d78ae39ac7920bc04d33ac9f1a06a7"} Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.051406 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerStarted","Data":"87d76ad8e6106ce18111899a657b4a0c77b34e29c2e101027714b79ae306611e"} Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.053813 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nfn89_14e81dbf-6c73-481c-b758-4c15cc0f3258/console/0.log" Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.053850 4922 generic.go:334] "Generic (PLEG): container finished" podID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerID="af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194" exitCode=2 Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.053899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nfn89" event={"ID":"14e81dbf-6c73-481c-b758-4c15cc0f3258","Type":"ContainerDied","Data":"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194"} Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.053928 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nfn89" event={"ID":"14e81dbf-6c73-481c-b758-4c15cc0f3258","Type":"ContainerDied","Data":"e3e0521f1d2586e618292c2cbf0b96f0a5f28185a418b592c2676c55e3372f97"} Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.053967 4922 scope.go:117] "RemoveContainer" containerID="af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194" Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.054156 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nfn89" Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.090244 4922 scope.go:117] "RemoveContainer" containerID="af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194" Feb 18 11:49:47 crc kubenswrapper[4922]: E0218 11:49:47.094992 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194\": container with ID starting with af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194 not found: ID does not exist" containerID="af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194" Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.095041 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194"} err="failed to get container status \"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194\": rpc error: code = NotFound desc = could not find container \"af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194\": container with ID starting with af27bceaff8159c64269dd32349e1085ec2a3938f8c97b9f88035f96de999194 not found: ID does not exist" Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.095240 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:49:47 crc kubenswrapper[4922]: I0218 11:49:47.101399 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nfn89"] Feb 18 11:49:48 crc kubenswrapper[4922]: I0218 11:49:48.071448 4922 generic.go:334] "Generic (PLEG): container finished" podID="188679bc-8b67-4136-94ce-fa515c1c950a" containerID="00255652c77fcf3ae566adc81c151063bc789a070562e1934905a64c1c694b2c" exitCode=0 Feb 18 11:49:48 crc kubenswrapper[4922]: I0218 11:49:48.071543 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" event={"ID":"188679bc-8b67-4136-94ce-fa515c1c950a","Type":"ContainerDied","Data":"00255652c77fcf3ae566adc81c151063bc789a070562e1934905a64c1c694b2c"} Feb 18 11:49:48 crc kubenswrapper[4922]: I0218 11:49:48.076614 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerStarted","Data":"a2085f95b1cea95e061e2dae349488380da4a4af1b753b095691e935490ce027"} Feb 18 11:49:48 crc kubenswrapper[4922]: I0218 11:49:48.981134 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" path="/var/lib/kubelet/pods/14e81dbf-6c73-481c-b758-4c15cc0f3258/volumes" Feb 18 11:49:49 crc kubenswrapper[4922]: I0218 11:49:49.087204 4922 generic.go:334] "Generic (PLEG): container finished" podID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerID="a2085f95b1cea95e061e2dae349488380da4a4af1b753b095691e935490ce027" exitCode=0 Feb 18 11:49:49 crc kubenswrapper[4922]: I0218 11:49:49.087289 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerDied","Data":"a2085f95b1cea95e061e2dae349488380da4a4af1b753b095691e935490ce027"} Feb 18 11:49:49 crc kubenswrapper[4922]: I0218 11:49:49.092115 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="188679bc-8b67-4136-94ce-fa515c1c950a" containerID="e110bc3356ab463d5b9cc069bb258ef89272cacab9d97001eb0c4514a140e3c8" exitCode=0 Feb 18 11:49:49 crc kubenswrapper[4922]: I0218 11:49:49.092179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" event={"ID":"188679bc-8b67-4136-94ce-fa515c1c950a","Type":"ContainerDied","Data":"e110bc3356ab463d5b9cc069bb258ef89272cacab9d97001eb0c4514a140e3c8"} Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.104010 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerStarted","Data":"88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb"} Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.129468 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8cjdv" podStartSLOduration=1.536647785 podStartE2EDuration="4.129448402s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="2026-02-18 11:49:47.052333369 +0000 UTC m=+788.780037449" lastFinishedPulling="2026-02-18 11:49:49.645133976 +0000 UTC m=+791.372838066" observedRunningTime="2026-02-18 11:49:50.124963112 +0000 UTC m=+791.852667192" watchObservedRunningTime="2026-02-18 11:49:50.129448402 +0000 UTC m=+791.857152482" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.362519 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.521282 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle\") pod \"188679bc-8b67-4136-94ce-fa515c1c950a\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.521473 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8llc\" (UniqueName: \"kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc\") pod \"188679bc-8b67-4136-94ce-fa515c1c950a\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.521570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util\") pod \"188679bc-8b67-4136-94ce-fa515c1c950a\" (UID: \"188679bc-8b67-4136-94ce-fa515c1c950a\") " Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.522286 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle" (OuterVolumeSpecName: "bundle") pod "188679bc-8b67-4136-94ce-fa515c1c950a" (UID: "188679bc-8b67-4136-94ce-fa515c1c950a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.529633 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc" (OuterVolumeSpecName: "kube-api-access-k8llc") pod "188679bc-8b67-4136-94ce-fa515c1c950a" (UID: "188679bc-8b67-4136-94ce-fa515c1c950a"). InnerVolumeSpecName "kube-api-access-k8llc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.556482 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util" (OuterVolumeSpecName: "util") pod "188679bc-8b67-4136-94ce-fa515c1c950a" (UID: "188679bc-8b67-4136-94ce-fa515c1c950a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.622980 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8llc\" (UniqueName: \"kubernetes.io/projected/188679bc-8b67-4136-94ce-fa515c1c950a-kube-api-access-k8llc\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.623023 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-util\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:50 crc kubenswrapper[4922]: I0218 11:49:50.623037 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/188679bc-8b67-4136-94ce-fa515c1c950a-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:51 crc kubenswrapper[4922]: I0218 11:49:51.112712 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" Feb 18 11:49:51 crc kubenswrapper[4922]: I0218 11:49:51.112724 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4" event={"ID":"188679bc-8b67-4136-94ce-fa515c1c950a","Type":"ContainerDied","Data":"338a8537317d0f5f601de5b9d14a10031bd912511e91d08e59143447f9685b68"} Feb 18 11:49:51 crc kubenswrapper[4922]: I0218 11:49:51.112767 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="338a8537317d0f5f601de5b9d14a10031bd912511e91d08e59143447f9685b68" Feb 18 11:49:56 crc kubenswrapper[4922]: I0218 11:49:56.615587 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:56 crc kubenswrapper[4922]: I0218 11:49:56.616677 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:56 crc kubenswrapper[4922]: I0218 11:49:56.659166 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:57 crc kubenswrapper[4922]: I0218 11:49:57.212998 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:49:59 crc kubenswrapper[4922]: I0218 11:49:59.076554 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:49:59 crc kubenswrapper[4922]: I0218 11:49:59.163015 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8cjdv" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="registry-server" containerID="cri-o://88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb" gracePeriod=2 Feb 18 11:49:59 crc kubenswrapper[4922]: E0218 11:49:59.629731 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0573a6f7_8a5e_4083_8dc6_64608707229c.slice/crio-88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0573a6f7_8a5e_4083_8dc6_64608707229c.slice/crio-conmon-88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb.scope\": RecentStats: unable to find data in memory cache]" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.175389 4922 generic.go:334] "Generic (PLEG): container finished" podID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerID="88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb" exitCode=0 Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.175431 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerDied","Data":"88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb"} Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.295552 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.345570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4485\" (UniqueName: \"kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485\") pod \"0573a6f7-8a5e-4083-8dc6-64608707229c\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.345754 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content\") pod \"0573a6f7-8a5e-4083-8dc6-64608707229c\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.345823 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities\") pod \"0573a6f7-8a5e-4083-8dc6-64608707229c\" (UID: \"0573a6f7-8a5e-4083-8dc6-64608707229c\") " Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.346709 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities" (OuterVolumeSpecName: "utilities") pod "0573a6f7-8a5e-4083-8dc6-64608707229c" (UID: "0573a6f7-8a5e-4083-8dc6-64608707229c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.352031 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485" (OuterVolumeSpecName: "kube-api-access-q4485") pod "0573a6f7-8a5e-4083-8dc6-64608707229c" (UID: "0573a6f7-8a5e-4083-8dc6-64608707229c"). InnerVolumeSpecName "kube-api-access-q4485". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.447409 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4485\" (UniqueName: \"kubernetes.io/projected/0573a6f7-8a5e-4083-8dc6-64608707229c-kube-api-access-q4485\") on node \"crc\" DevicePath \"\"" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.447447 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.491833 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0573a6f7-8a5e-4083-8dc6-64608707229c" (UID: "0573a6f7-8a5e-4083-8dc6-64608707229c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:50:00 crc kubenswrapper[4922]: I0218 11:50:00.548217 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0573a6f7-8a5e-4083-8dc6-64608707229c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.183722 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8cjdv" event={"ID":"0573a6f7-8a5e-4083-8dc6-64608707229c","Type":"ContainerDied","Data":"87d76ad8e6106ce18111899a657b4a0c77b34e29c2e101027714b79ae306611e"} Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.183788 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8cjdv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.184017 4922 scope.go:117] "RemoveContainer" containerID="88ca51ca1d2b61ac84bdfb12a835fecc00413b6efaf7b24f5fd72c26c077f3eb" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.203717 4922 scope.go:117] "RemoveContainer" containerID="a2085f95b1cea95e061e2dae349488380da4a4af1b753b095691e935490ce027" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.208562 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.214057 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8cjdv"] Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.220152 4922 scope.go:117] "RemoveContainer" containerID="518cfc0e2122ba3d8575366d8ca43a4359d78ae39ac7920bc04d33ac9f1a06a7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.262955 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv"] Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263174 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="extract" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263185 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="extract" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263201 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263207 4922 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263216 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="registry-server" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263223 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="registry-server" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263232 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="extract-utilities" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263238 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="extract-utilities" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263248 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="pull" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263254 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="pull" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263261 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="extract-content" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263266 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="extract-content" Feb 18 11:50:01 crc kubenswrapper[4922]: E0218 11:50:01.263276 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="util" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263281 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="util" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263394 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="188679bc-8b67-4136-94ce-fa515c1c950a" containerName="extract" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263402 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e81dbf-6c73-481c-b758-4c15cc0f3258" containerName="console" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263414 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" containerName="registry-server" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.263781 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.266078 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.266226 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.266440 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.267513 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.267725 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vdh67" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.289080 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv"] Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.457893 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vs5\" (UniqueName: \"kubernetes.io/projected/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-kube-api-access-x5vs5\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.457942 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-apiservice-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.457967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-webhook-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.507209 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7"] Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.508444 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.510295 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9shkx" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.510734 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.512130 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.530601 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7"] Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.558722 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-apiservice-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.558761 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5vs5\" (UniqueName: \"kubernetes.io/projected/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-kube-api-access-x5vs5\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.558785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-webhook-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.564487 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-apiservice-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.565881 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-webhook-cert\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.576315 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5vs5\" (UniqueName: \"kubernetes.io/projected/9fbb7bfe-c8d9-4a50-9326-bf07e99f4336-kube-api-access-x5vs5\") pod \"metallb-operator-controller-manager-576949b4c-vwcqv\" (UID: \"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336\") " pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.582303 4922 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.660521 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxw5f\" (UniqueName: \"kubernetes.io/projected/7c9c6b01-e766-411c-a275-ae7ea3a9659e-kube-api-access-mxw5f\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.660856 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-apiservice-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.660908 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-webhook-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.762488 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-apiservice-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.762571 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-webhook-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.762647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxw5f\" (UniqueName: \"kubernetes.io/projected/7c9c6b01-e766-411c-a275-ae7ea3a9659e-kube-api-access-mxw5f\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.768645 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-webhook-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.769980 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c9c6b01-e766-411c-a275-ae7ea3a9659e-apiservice-cert\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: 
\"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.780256 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxw5f\" (UniqueName: \"kubernetes.io/projected/7c9c6b01-e766-411c-a275-ae7ea3a9659e-kube-api-access-mxw5f\") pod \"metallb-operator-webhook-server-6d8c5554f7-psxr7\" (UID: \"7c9c6b01-e766-411c-a275-ae7ea3a9659e\") " pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.825430 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:01 crc kubenswrapper[4922]: I0218 11:50:01.845415 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv"] Feb 18 11:50:01 crc kubenswrapper[4922]: W0218 11:50:01.874595 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fbb7bfe_c8d9_4a50_9326_bf07e99f4336.slice/crio-0e1e249c09887447ab6ab288b159c1b353766d339abbb2bfc52339a64e5c8241 WatchSource:0}: Error finding container 0e1e249c09887447ab6ab288b159c1b353766d339abbb2bfc52339a64e5c8241: Status 404 returned error can't find the container with id 0e1e249c09887447ab6ab288b159c1b353766d339abbb2bfc52339a64e5c8241 Feb 18 11:50:02 crc kubenswrapper[4922]: I0218 11:50:02.190966 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" event={"ID":"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336","Type":"ContainerStarted","Data":"0e1e249c09887447ab6ab288b159c1b353766d339abbb2bfc52339a64e5c8241"} Feb 18 11:50:02 crc kubenswrapper[4922]: I0218 11:50:02.371094 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7"] Feb 18 11:50:02 crc kubenswrapper[4922]: W0218 11:50:02.380377 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c9c6b01_e766_411c_a275_ae7ea3a9659e.slice/crio-c2c6602927306626fdf9e4ef0dfd7403d6411d01d167010965a74fa9221cf333 WatchSource:0}: Error finding container c2c6602927306626fdf9e4ef0dfd7403d6411d01d167010965a74fa9221cf333: Status 404 returned error can't find the container with id c2c6602927306626fdf9e4ef0dfd7403d6411d01d167010965a74fa9221cf333 Feb 18 11:50:02 crc kubenswrapper[4922]: I0218 11:50:02.988301 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0573a6f7-8a5e-4083-8dc6-64608707229c" path="/var/lib/kubelet/pods/0573a6f7-8a5e-4083-8dc6-64608707229c/volumes" Feb 18 11:50:03 crc kubenswrapper[4922]: I0218 11:50:03.199809 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" event={"ID":"7c9c6b01-e766-411c-a275-ae7ea3a9659e","Type":"ContainerStarted","Data":"c2c6602927306626fdf9e4ef0dfd7403d6411d01d167010965a74fa9221cf333"} Feb 18 11:50:06 crc kubenswrapper[4922]: I0218 11:50:06.231241 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" event={"ID":"9fbb7bfe-c8d9-4a50-9326-bf07e99f4336","Type":"ContainerStarted","Data":"a8c1643848298fd562d8eb39564b149627303cc39e149e5915f0fc077e5615d3"} Feb 18 11:50:06 crc kubenswrapper[4922]: I0218 
11:50:06.231641 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:06 crc kubenswrapper[4922]: I0218 11:50:06.256569 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" podStartSLOduration=1.392976516 podStartE2EDuration="5.256548466s" podCreationTimestamp="2026-02-18 11:50:01 +0000 UTC" firstStartedPulling="2026-02-18 11:50:01.883164954 +0000 UTC m=+803.610869034" lastFinishedPulling="2026-02-18 11:50:05.746736904 +0000 UTC m=+807.474440984" observedRunningTime="2026-02-18 11:50:06.253744387 +0000 UTC m=+807.981448467" watchObservedRunningTime="2026-02-18 11:50:06.256548466 +0000 UTC m=+807.984252546" Feb 18 11:50:08 crc kubenswrapper[4922]: I0218 11:50:08.252393 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" event={"ID":"7c9c6b01-e766-411c-a275-ae7ea3a9659e","Type":"ContainerStarted","Data":"0dacb45b1950eba87778f5769b3face2a5481b97a135aef796b6629ca69b4a00"} Feb 18 11:50:08 crc kubenswrapper[4922]: I0218 11:50:08.252781 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:08 crc kubenswrapper[4922]: I0218 11:50:08.272345 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" podStartSLOduration=1.773910783 podStartE2EDuration="7.272319129s" podCreationTimestamp="2026-02-18 11:50:01 +0000 UTC" firstStartedPulling="2026-02-18 11:50:02.382701684 +0000 UTC m=+804.110405764" lastFinishedPulling="2026-02-18 11:50:07.88111001 +0000 UTC m=+809.608814110" observedRunningTime="2026-02-18 11:50:08.26952243 +0000 UTC m=+809.997226600" watchObservedRunningTime="2026-02-18 11:50:08.272319129 +0000 UTC m=+810.000023219" Feb 18 11:50:09 crc kubenswrapper[4922]: I0218 11:50:09.807572 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:50:09 crc kubenswrapper[4922]: I0218 11:50:09.807892 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:50:17 crc kubenswrapper[4922]: I0218 11:50:17.080151 4922 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod14e81dbf-6c73-481c-b758-4c15cc0f3258"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod14e81dbf-6c73-481c-b758-4c15cc0f3258] : Timed out while waiting for systemd to remove kubepods-burstable-pod14e81dbf_6c73_481c_b758_4c15cc0f3258.slice" Feb 18 11:50:21 crc kubenswrapper[4922]: I0218 11:50:21.829599 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d8c5554f7-psxr7" Feb 18 11:50:39 crc kubenswrapper[4922]: I0218 11:50:39.808144 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:50:39 crc kubenswrapper[4922]: I0218 11:50:39.808749 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:50:39 crc kubenswrapper[4922]: I0218 11:50:39.808801 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:50:39 crc kubenswrapper[4922]: I0218 11:50:39.809484 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:50:39 crc kubenswrapper[4922]: I0218 11:50:39.809554 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd" gracePeriod=600 Feb 18 11:50:40 crc kubenswrapper[4922]: I0218 11:50:40.451310 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd" exitCode=0 Feb 18 11:50:40 crc kubenswrapper[4922]: I0218 11:50:40.451390 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd"} Feb 18 11:50:40 crc kubenswrapper[4922]: I0218 11:50:40.451882 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8"} Feb 18 11:50:40 crc kubenswrapper[4922]: I0218 11:50:40.451904 4922 scope.go:117] "RemoveContainer" containerID="02db91f6fc3d787614fdd835995da44a7dcbace8afdf71101bf73a1aefb8d53b" Feb 18 11:50:41 crc kubenswrapper[4922]: I0218 11:50:41.585351 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-576949b4c-vwcqv" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.314975 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.316578 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.320442 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fwwpd"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.322642 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c2whk" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.323734 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.323733 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.334094 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.334091 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.361975 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.464691 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7rvcx"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.465810 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.470894 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.471006 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.471216 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2dznj" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.473327 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.482021 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-8ds4f"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.483139 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.485326 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.505920 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqqbk\" (UniqueName: \"kubernetes.io/projected/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-kube-api-access-bqqbk\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.505969 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506032 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-sockets\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzvb\" (UniqueName: \"kubernetes.io/projected/d069bacc-29a2-4aeb-9437-e654621c73c8-kube-api-access-zvzvb\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506179 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics-certs\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506235 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-startup\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506268 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-reloader\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506318 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-conf\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.506443 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8ds4f"] Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607583 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics-certs\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607633 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-startup\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607668 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-reloader\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607710 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7989\" (UniqueName: \"kubernetes.io/projected/4e80d896-3eb4-4dc8-b217-441a5a09dd05-kube-api-access-t7989\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607736 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-conf\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607762 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607783 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-metrics-certs\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607817 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqqbk\" 
(UniqueName: \"kubernetes.io/projected/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-kube-api-access-bqqbk\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa729491-0a34-4772-8178-d8566c355add-metallb-excludel2\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607914 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607958 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-sockets\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.607983 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-cert\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.608010 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxh8k\" (UniqueName: \"kubernetes.io/projected/aa729491-0a34-4772-8178-d8566c355add-kube-api-access-cxh8k\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.608064 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-metrics-certs\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.608084 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzvb\" (UniqueName: \"kubernetes.io/projected/d069bacc-29a2-4aeb-9437-e654621c73c8-kube-api-access-zvzvb\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.608583 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-reloader\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.608966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-startup\") pod \"frr-k8s-fwwpd\" (UID: 
\"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.609090 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-sockets\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.609111 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-frr-conf\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.609328 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.613921 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d069bacc-29a2-4aeb-9437-e654621c73c8-metrics-certs\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.614088 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.626564 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzvb\" (UniqueName: \"kubernetes.io/projected/d069bacc-29a2-4aeb-9437-e654621c73c8-kube-api-access-zvzvb\") pod \"frr-k8s-fwwpd\" (UID: \"d069bacc-29a2-4aeb-9437-e654621c73c8\") " pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.628219 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqqbk\" (UniqueName: \"kubernetes.io/projected/74e84ee6-9d14-48aa-9e59-f1ee46e15fcf-kube-api-access-bqqbk\") pod \"frr-k8s-webhook-server-78b44bf5bb-9cn2f\" (UID: \"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.640273 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.652731 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.709938 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7989\" (UniqueName: \"kubernetes.io/projected/4e80d896-3eb4-4dc8-b217-441a5a09dd05-kube-api-access-t7989\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.709993 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-metrics-certs\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.710021 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.710064 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa729491-0a34-4772-8178-d8566c355add-metallb-excludel2\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.710108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-cert\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.710134 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxh8k\" (UniqueName: \"kubernetes.io/projected/aa729491-0a34-4772-8178-d8566c355add-kube-api-access-cxh8k\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.710165 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-metrics-certs\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.711114 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/aa729491-0a34-4772-8178-d8566c355add-metallb-excludel2\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: E0218 11:50:42.711234 4922 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 11:50:42 crc kubenswrapper[4922]: E0218 11:50:42.711280 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist podName:aa729491-0a34-4772-8178-d8566c355add nodeName:}" failed. 
No retries permitted until 2026-02-18 11:50:43.21126672 +0000 UTC m=+844.938970800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist") pod "speaker-7rvcx" (UID: "aa729491-0a34-4772-8178-d8566c355add") : secret "metallb-memberlist" not found Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.714669 4922 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.714865 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-metrics-certs\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.715129 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-metrics-certs\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.727113 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e80d896-3eb4-4dc8-b217-441a5a09dd05-cert\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.731617 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7989\" (UniqueName: \"kubernetes.io/projected/4e80d896-3eb4-4dc8-b217-441a5a09dd05-kube-api-access-t7989\") pod \"controller-69bbfbf88f-8ds4f\" (UID: \"4e80d896-3eb4-4dc8-b217-441a5a09dd05\") " pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.749639 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxh8k\" (UniqueName: \"kubernetes.io/projected/aa729491-0a34-4772-8178-d8566c355add-kube-api-access-cxh8k\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.802640 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:42 crc kubenswrapper[4922]: I0218 11:50:42.876686 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f"] Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.002729 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8ds4f"] Feb 18 11:50:43 crc kubenswrapper[4922]: W0218 11:50:43.007165 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e80d896_3eb4_4dc8_b217_441a5a09dd05.slice/crio-9e0ca6b826edd881d338755a10261f41cd0607a7eb79366ee36d376833725a87 WatchSource:0}: Error finding container 9e0ca6b826edd881d338755a10261f41cd0607a7eb79366ee36d376833725a87: Status 404 returned error can't find the container with id 9e0ca6b826edd881d338755a10261f41cd0607a7eb79366ee36d376833725a87 Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.218887 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.227006 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/aa729491-0a34-4772-8178-d8566c355add-memberlist\") pod \"speaker-7rvcx\" (UID: \"aa729491-0a34-4772-8178-d8566c355add\") " pod="metallb-system/speaker-7rvcx" Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.379421 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7rvcx" Feb 18 11:50:43 crc kubenswrapper[4922]: W0218 11:50:43.401872 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa729491_0a34_4772_8178_d8566c355add.slice/crio-fd6b3bf8d78b636ec606e71a7241b2afc71f99d1f761934d419f6dab90b2bc54 WatchSource:0}: Error finding container fd6b3bf8d78b636ec606e71a7241b2afc71f99d1f761934d419f6dab90b2bc54: Status 404 returned error can't find the container with id fd6b3bf8d78b636ec606e71a7241b2afc71f99d1f761934d419f6dab90b2bc54 Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.498281 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" event={"ID":"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf","Type":"ContainerStarted","Data":"c700c4485c2f30fff8a764e4fe0d5d21ed650a7278e8cc60dfa005229bf71587"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.499978 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"dd093157a54d99e46ba2c1e7b3479f74534b4f69d660714eb9cb7fab2bee10a3"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.502258 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rvcx" event={"ID":"aa729491-0a34-4772-8178-d8566c355add","Type":"ContainerStarted","Data":"fd6b3bf8d78b636ec606e71a7241b2afc71f99d1f761934d419f6dab90b2bc54"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.508975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8ds4f" 
event={"ID":"4e80d896-3eb4-4dc8-b217-441a5a09dd05","Type":"ContainerStarted","Data":"c28e7f3963f59d42e3f95d86350d9c59cfb841c6daabe2cf1440919548cf92fa"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.509051 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8ds4f" event={"ID":"4e80d896-3eb4-4dc8-b217-441a5a09dd05","Type":"ContainerStarted","Data":"1a095f7b3232e73c8ddc8b63f1c0c79dd1c30f459842e80e5416316587b84041"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.509073 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8ds4f" event={"ID":"4e80d896-3eb4-4dc8-b217-441a5a09dd05","Type":"ContainerStarted","Data":"9e0ca6b826edd881d338755a10261f41cd0607a7eb79366ee36d376833725a87"} Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.509220 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:50:43 crc kubenswrapper[4922]: I0218 11:50:43.532741 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-8ds4f" podStartSLOduration=1.532703717 podStartE2EDuration="1.532703717s" podCreationTimestamp="2026-02-18 11:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:50:43.526821683 +0000 UTC m=+845.254525783" watchObservedRunningTime="2026-02-18 11:50:43.532703717 +0000 UTC m=+845.260407797" Feb 18 11:50:44 crc kubenswrapper[4922]: I0218 11:50:44.517306 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rvcx" event={"ID":"aa729491-0a34-4772-8178-d8566c355add","Type":"ContainerStarted","Data":"80cbd1fd2ea9db1f07cfe08607f0bf5fbb3d830bda781f3858b1f3689337c2b7"} Feb 18 11:50:44 crc kubenswrapper[4922]: I0218 11:50:44.518565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rvcx" event={"ID":"aa729491-0a34-4772-8178-d8566c355add","Type":"ContainerStarted","Data":"f59d365f37d4f76f87e5b85c0896de123746606dd89d25dbb109c0fedcde263c"} Feb 18 11:50:44 crc kubenswrapper[4922]: I0218 11:50:44.518605 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7rvcx" Feb 18 11:50:44 crc kubenswrapper[4922]: I0218 11:50:44.538638 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7rvcx" podStartSLOduration=2.538608174 podStartE2EDuration="2.538608174s" podCreationTimestamp="2026-02-18 11:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:50:44.53763049 +0000 UTC m=+846.265334570" watchObservedRunningTime="2026-02-18 11:50:44.538608174 +0000 UTC m=+846.266312254" Feb 18 11:50:50 crc kubenswrapper[4922]: I0218 11:50:50.567042 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" event={"ID":"74e84ee6-9d14-48aa-9e59-f1ee46e15fcf","Type":"ContainerStarted","Data":"ba2534187ceeb73d0d90dddec4dd035aff8b92f51719eadcfd2a3263ab03a830"} Feb 18 11:50:50 crc kubenswrapper[4922]: I0218 11:50:50.567650 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:50:50 crc kubenswrapper[4922]: I0218 11:50:50.569108 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="d069bacc-29a2-4aeb-9437-e654621c73c8" containerID="dd2c12a86f634e49366c0496a7fa0b1cb6cdd22fb0ca2230ef33223ed8465cd5" exitCode=0 Feb 18 11:50:50 crc kubenswrapper[4922]: I0218 11:50:50.569149 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerDied","Data":"dd2c12a86f634e49366c0496a7fa0b1cb6cdd22fb0ca2230ef33223ed8465cd5"} Feb 18 11:50:50 crc kubenswrapper[4922]: I0218 11:50:50.591049 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" podStartSLOduration=1.396541122 podStartE2EDuration="8.591031148s" podCreationTimestamp="2026-02-18 11:50:42 +0000 UTC" firstStartedPulling="2026-02-18 11:50:42.88631476 +0000 UTC m=+844.614018840" lastFinishedPulling="2026-02-18 11:50:50.080804776 +0000 UTC m=+851.808508866" observedRunningTime="2026-02-18 11:50:50.587715117 +0000 UTC m=+852.315419197" watchObservedRunningTime="2026-02-18 11:50:50.591031148 +0000 UTC m=+852.318735228" Feb 18 11:50:51 crc kubenswrapper[4922]: I0218 11:50:51.577968 4922 generic.go:334] "Generic (PLEG): container finished" podID="d069bacc-29a2-4aeb-9437-e654621c73c8" containerID="6c84959e9441ad2a088e1d84890e555a1afcc5db5a316080377dfcaef787553d" exitCode=0 Feb 18 11:50:51 crc kubenswrapper[4922]: I0218 11:50:51.578040 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerDied","Data":"6c84959e9441ad2a088e1d84890e555a1afcc5db5a316080377dfcaef787553d"} Feb 18 11:50:52 crc kubenswrapper[4922]: I0218 11:50:52.586430 4922 generic.go:334] "Generic (PLEG): container finished" podID="d069bacc-29a2-4aeb-9437-e654621c73c8" containerID="21966ed5eca5df8b6a2c98f7afcf86a7948ab14135f37cac606291a98f8e4595" exitCode=0 Feb 18 11:50:52 crc kubenswrapper[4922]: I0218 11:50:52.586473 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerDied","Data":"21966ed5eca5df8b6a2c98f7afcf86a7948ab14135f37cac606291a98f8e4595"} Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.382146 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7rvcx" Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.604058 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"cccacbb85d963e92575c7289808168a688091dbb6f20629a1bb6201480f3feb3"} Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.604121 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"473686045b41915ce5b680985e16192d4c921709f6508bdb13249c8baea3a22b"} Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.604131 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"193aab8d8ca9a911883eddc72edcc8981c1085200f0a894cb79d7596a3fab13d"} Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.604139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" 
event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"a58ea9fc19026d27e1bb4484655923385bd5f4ea4ac7e9897e8d634315786c2f"} Feb 18 11:50:53 crc kubenswrapper[4922]: I0218 11:50:53.604164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"dcd9db767feb7caafeeb4bd85518538028ae08c124f64fd727f6812221d071e4"} Feb 18 11:50:54 crc kubenswrapper[4922]: I0218 11:50:54.616130 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fwwpd" event={"ID":"d069bacc-29a2-4aeb-9437-e654621c73c8","Type":"ContainerStarted","Data":"5fc9d22b47ab0d541e8edfc67262724185c66fdb775d5c51055e2154bb16f472"} Feb 18 11:50:54 crc kubenswrapper[4922]: I0218 11:50:54.617116 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:54 crc kubenswrapper[4922]: I0218 11:50:54.638133 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fwwpd" podStartSLOduration=5.382892618 podStartE2EDuration="12.638115425s" podCreationTimestamp="2026-02-18 11:50:42 +0000 UTC" firstStartedPulling="2026-02-18 11:50:42.83541631 +0000 UTC m=+844.563120390" lastFinishedPulling="2026-02-18 11:50:50.090639117 +0000 UTC m=+851.818343197" observedRunningTime="2026-02-18 11:50:54.636232329 +0000 UTC m=+856.363936409" watchObservedRunningTime="2026-02-18 11:50:54.638115425 +0000 UTC m=+856.365819505" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.173100 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.174154 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.179796 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8m47b" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.179856 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.179805 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.186338 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.317407 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjnp\" (UniqueName: \"kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp\") pod \"openstack-operator-index-h4fcj\" (UID: \"987d782f-67f5-4ce7-bd98-cea59f177e8d\") " pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.419223 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjnp\" (UniqueName: \"kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp\") pod \"openstack-operator-index-h4fcj\" (UID: \"987d782f-67f5-4ce7-bd98-cea59f177e8d\") " pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.437743 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjnp\" (UniqueName: \"kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp\") pod \"openstack-operator-index-h4fcj\" (UID: \"987d782f-67f5-4ce7-bd98-cea59f177e8d\") " pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.552575 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:50:56 crc kubenswrapper[4922]: I0218 11:50:56.955278 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:50:56 crc kubenswrapper[4922]: W0218 11:50:56.967407 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod987d782f_67f5_4ce7_bd98_cea59f177e8d.slice/crio-220b34ee83ba684aad00844ee0d2258af0223cfccb2c56b89dd704093a1ea2f0 WatchSource:0}: Error finding container 220b34ee83ba684aad00844ee0d2258af0223cfccb2c56b89dd704093a1ea2f0: Status 404 returned error can't find the container with id 220b34ee83ba684aad00844ee0d2258af0223cfccb2c56b89dd704093a1ea2f0 Feb 18 11:50:57 crc kubenswrapper[4922]: I0218 11:50:57.640847 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4fcj" event={"ID":"987d782f-67f5-4ce7-bd98-cea59f177e8d","Type":"ContainerStarted","Data":"220b34ee83ba684aad00844ee0d2258af0223cfccb2c56b89dd704093a1ea2f0"} Feb 18 11:50:57 crc kubenswrapper[4922]: I0218 11:50:57.653288 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:57 crc kubenswrapper[4922]: I0218 11:50:57.696135 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:50:59 crc kubenswrapper[4922]: I0218 11:50:59.514029 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.126250 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8rrxt"] Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.128382 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.134087 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8rrxt"] Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.273214 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwsg\" (UniqueName: \"kubernetes.io/projected/191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a-kube-api-access-rnwsg\") pod \"openstack-operator-index-8rrxt\" (UID: \"191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a\") " pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.373994 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwsg\" (UniqueName: \"kubernetes.io/projected/191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a-kube-api-access-rnwsg\") pod \"openstack-operator-index-8rrxt\" (UID: \"191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a\") " pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.396471 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwsg\" (UniqueName: \"kubernetes.io/projected/191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a-kube-api-access-rnwsg\") pod \"openstack-operator-index-8rrxt\" (UID: \"191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a\") " pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:00 crc kubenswrapper[4922]: I0218 11:51:00.448178 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.167253 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8rrxt"] Feb 18 11:51:01 crc kubenswrapper[4922]: W0218 11:51:01.174138 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod191b8ec5_c4e8_4e8c_92c2_fa2fd655f94a.slice/crio-35ff7845aa20abd551265e8f928cb4147f448a43d60941fcbec09c09708a6618 WatchSource:0}: Error finding container 35ff7845aa20abd551265e8f928cb4147f448a43d60941fcbec09c09708a6618: Status 404 returned error can't find the container with id 35ff7845aa20abd551265e8f928cb4147f448a43d60941fcbec09c09708a6618 Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.667200 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8rrxt" event={"ID":"191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a","Type":"ContainerStarted","Data":"8e7386d473cb8963d0167a473abef9786a4348b15950a6f1efb44730d0c1bb6a"} Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.667262 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8rrxt" event={"ID":"191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a","Type":"ContainerStarted","Data":"35ff7845aa20abd551265e8f928cb4147f448a43d60941fcbec09c09708a6618"} Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.669851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4fcj" event={"ID":"987d782f-67f5-4ce7-bd98-cea59f177e8d","Type":"ContainerStarted","Data":"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615"} Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.669989 4922 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-h4fcj" podUID="987d782f-67f5-4ce7-bd98-cea59f177e8d" containerName="registry-server" containerID="cri-o://1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615" gracePeriod=2 Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.688172 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8rrxt" podStartSLOduration=1.614246297 podStartE2EDuration="1.688153943s" podCreationTimestamp="2026-02-18 11:51:00 +0000 UTC" firstStartedPulling="2026-02-18 11:51:01.180261147 +0000 UTC m=+862.907965217" lastFinishedPulling="2026-02-18 11:51:01.254168783 +0000 UTC m=+862.981872863" observedRunningTime="2026-02-18 11:51:01.687929787 +0000 UTC m=+863.415633867" watchObservedRunningTime="2026-02-18 11:51:01.688153943 +0000 UTC m=+863.415858023" Feb 18 11:51:01 crc kubenswrapper[4922]: I0218 11:51:01.705735 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-h4fcj" podStartSLOduration=1.639864966 podStartE2EDuration="5.705713634s" podCreationTimestamp="2026-02-18 11:50:56 +0000 UTC" firstStartedPulling="2026-02-18 11:50:56.970022713 +0000 UTC m=+858.697726793" lastFinishedPulling="2026-02-18 11:51:01.035871371 +0000 UTC m=+862.763575461" observedRunningTime="2026-02-18 11:51:01.701779547 +0000 UTC m=+863.429483667" watchObservedRunningTime="2026-02-18 11:51:01.705713634 +0000 UTC m=+863.433417714" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.033903 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.198160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjnp\" (UniqueName: \"kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp\") pod \"987d782f-67f5-4ce7-bd98-cea59f177e8d\" (UID: \"987d782f-67f5-4ce7-bd98-cea59f177e8d\") " Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.205011 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp" (OuterVolumeSpecName: "kube-api-access-mmjnp") pod "987d782f-67f5-4ce7-bd98-cea59f177e8d" (UID: "987d782f-67f5-4ce7-bd98-cea59f177e8d"). InnerVolumeSpecName "kube-api-access-mmjnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.299975 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjnp\" (UniqueName: \"kubernetes.io/projected/987d782f-67f5-4ce7-bd98-cea59f177e8d-kube-api-access-mmjnp\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.648246 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9cn2f" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.656133 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fwwpd" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.682401 4922 generic.go:334] "Generic (PLEG): container finished" podID="987d782f-67f5-4ce7-bd98-cea59f177e8d" containerID="1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615" exitCode=0 Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.683035 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h4fcj" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.693057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4fcj" event={"ID":"987d782f-67f5-4ce7-bd98-cea59f177e8d","Type":"ContainerDied","Data":"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615"} Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.693107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h4fcj" event={"ID":"987d782f-67f5-4ce7-bd98-cea59f177e8d","Type":"ContainerDied","Data":"220b34ee83ba684aad00844ee0d2258af0223cfccb2c56b89dd704093a1ea2f0"} Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.693153 4922 scope.go:117] "RemoveContainer" containerID="1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.721312 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.726147 4922 scope.go:117] "RemoveContainer" containerID="1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615" Feb 18 11:51:02 crc kubenswrapper[4922]: E0218 11:51:02.726586 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615\": container with ID starting with 1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615 not found: ID does not exist" containerID="1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.726640 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615"} err="failed to get container status \"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615\": rpc error: code = NotFound desc = could not find container \"1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615\": container with ID starting with 1c29f0c6da2f82b64ece39e9fb4215cb1c2eb6d0c555e56f73f056d501dd0615 not found: ID does not exist" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.728694 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/openstack-operator-index-h4fcj"] Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.808513 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-8ds4f" Feb 18 11:51:02 crc kubenswrapper[4922]: I0218 11:51:02.983574 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987d782f-67f5-4ce7-bd98-cea59f177e8d" path="/var/lib/kubelet/pods/987d782f-67f5-4ce7-bd98-cea59f177e8d/volumes" Feb 18 11:51:10 crc kubenswrapper[4922]: I0218 11:51:10.449756 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:10 crc kubenswrapper[4922]: I0218 11:51:10.450258 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:10 crc kubenswrapper[4922]: I0218 11:51:10.477815 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:10 crc kubenswrapper[4922]: I0218 11:51:10.776692 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8rrxt" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.758035 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv"] Feb 18 11:51:11 crc kubenswrapper[4922]: E0218 11:51:11.758270 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987d782f-67f5-4ce7-bd98-cea59f177e8d" containerName="registry-server" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.758282 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="987d782f-67f5-4ce7-bd98-cea59f177e8d" containerName="registry-server" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.758425 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="987d782f-67f5-4ce7-bd98-cea59f177e8d" containerName="registry-server" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.759277 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.764149 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lqlmg" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.768823 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv"] Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.843876 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.843931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zrj\" (UniqueName: \"kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.843967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.944790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.944848 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zrj\" (UniqueName: \"kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.944882 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.945503 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.945706 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:11 crc kubenswrapper[4922]: I0218 11:51:11.971746 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zrj\" (UniqueName: \"kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj\") pod \"14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:12 crc kubenswrapper[4922]: I0218 11:51:12.081801 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:12 crc kubenswrapper[4922]: I0218 11:51:12.485728 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv"] Feb 18 11:51:12 crc kubenswrapper[4922]: W0218 11:51:12.494379 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83694df8_b6fe_4913_8f73_d53972c81f36.slice/crio-4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7 WatchSource:0}: Error finding container 4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7: Status 404 returned error can't find the container with id 4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7 Feb 18 11:51:12 crc kubenswrapper[4922]: I0218 11:51:12.750577 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerStarted","Data":"261ae208f78558fd60501df0dcebd0c60e4b4e40aab5ca673884561de1f961e5"} Feb 18 11:51:12 crc kubenswrapper[4922]: I0218 11:51:12.750867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerStarted","Data":"4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7"} Feb 18 11:51:13 crc kubenswrapper[4922]: I0218 11:51:13.759529 4922 generic.go:334] "Generic (PLEG): container finished" podID="83694df8-b6fe-4913-8f73-d53972c81f36" containerID="261ae208f78558fd60501df0dcebd0c60e4b4e40aab5ca673884561de1f961e5" exitCode=0 Feb 18 11:51:13 crc kubenswrapper[4922]: I0218 11:51:13.759591 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerDied","Data":"261ae208f78558fd60501df0dcebd0c60e4b4e40aab5ca673884561de1f961e5"} Feb 18 11:51:14 crc kubenswrapper[4922]: 
I0218 11:51:14.768454 4922 generic.go:334] "Generic (PLEG): container finished" podID="83694df8-b6fe-4913-8f73-d53972c81f36" containerID="92fb3b0912ddc49b82bc590ae3ba094326610c0781c4874e5c5df7beeeb18ae4" exitCode=0 Feb 18 11:51:14 crc kubenswrapper[4922]: I0218 11:51:14.768733 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerDied","Data":"92fb3b0912ddc49b82bc590ae3ba094326610c0781c4874e5c5df7beeeb18ae4"} Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.334174 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.337666 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.346797 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.491300 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.491406 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mpt\" (UniqueName: \"kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.491431 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.592532 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.592617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67mpt\" (UniqueName: \"kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.592648 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 
11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.593073 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.593104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.612470 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mpt\" (UniqueName: \"kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt\") pod \"community-operators-j769b\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.657446 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.826137 4922 generic.go:334] "Generic (PLEG): container finished" podID="83694df8-b6fe-4913-8f73-d53972c81f36" containerID="586d9d63f8b1efc80891299a636b0ac6ecb9eacdf61b31838f42df503a796f5b" exitCode=0 Feb 18 11:51:15 crc kubenswrapper[4922]: I0218 11:51:15.827355 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerDied","Data":"586d9d63f8b1efc80891299a636b0ac6ecb9eacdf61b31838f42df503a796f5b"} Feb 18 11:51:16 crc kubenswrapper[4922]: I0218 11:51:16.152848 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:16 crc kubenswrapper[4922]: I0218 11:51:16.833941 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerID="d47389a21468630e8d39ae2bd35959f5913e72d9622874ba171d46cf996381ed" exitCode=0 Feb 18 11:51:16 crc kubenswrapper[4922]: I0218 11:51:16.834012 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerDied","Data":"d47389a21468630e8d39ae2bd35959f5913e72d9622874ba171d46cf996381ed"} Feb 18 11:51:16 crc kubenswrapper[4922]: I0218 11:51:16.834588 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerStarted","Data":"5831f18aa88a23531ab09da14d84f9930cfd9b5366af084b1e7687e0f60fab18"} Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.108665 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.220271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util\") pod \"83694df8-b6fe-4913-8f73-d53972c81f36\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.220354 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2zrj\" (UniqueName: \"kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj\") pod \"83694df8-b6fe-4913-8f73-d53972c81f36\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.220423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle\") pod \"83694df8-b6fe-4913-8f73-d53972c81f36\" (UID: \"83694df8-b6fe-4913-8f73-d53972c81f36\") " Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.221412 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle" (OuterVolumeSpecName: "bundle") pod "83694df8-b6fe-4913-8f73-d53972c81f36" (UID: "83694df8-b6fe-4913-8f73-d53972c81f36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.227902 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj" (OuterVolumeSpecName: "kube-api-access-t2zrj") pod "83694df8-b6fe-4913-8f73-d53972c81f36" (UID: "83694df8-b6fe-4913-8f73-d53972c81f36"). InnerVolumeSpecName "kube-api-access-t2zrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.235589 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util" (OuterVolumeSpecName: "util") pod "83694df8-b6fe-4913-8f73-d53972c81f36" (UID: "83694df8-b6fe-4913-8f73-d53972c81f36"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.321632 4922 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-util\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.321671 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2zrj\" (UniqueName: \"kubernetes.io/projected/83694df8-b6fe-4913-8f73-d53972c81f36-kube-api-access-t2zrj\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.321682 4922 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/83694df8-b6fe-4913-8f73-d53972c81f36-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.846307 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" event={"ID":"83694df8-b6fe-4913-8f73-d53972c81f36","Type":"ContainerDied","Data":"4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7"} Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.846410 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4350210d6e70cfdde3c8b3dc3c6842679a33148f8e693a5bd1aabc74d003ccc7" Feb 18 11:51:17 crc kubenswrapper[4922]: I0218 11:51:17.846563 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv" Feb 18 11:51:18 crc kubenswrapper[4922]: I0218 11:51:18.860340 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerID="e109ea9bbc68468ccc7af684b8cf48822d1125374586c5006fdd34b142717ecb" exitCode=0 Feb 18 11:51:18 crc kubenswrapper[4922]: I0218 11:51:18.860559 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerDied","Data":"e109ea9bbc68468ccc7af684b8cf48822d1125374586c5006fdd34b142717ecb"} Feb 18 11:51:19 crc kubenswrapper[4922]: I0218 11:51:19.870670 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerStarted","Data":"7457c3c4cfe051c442809f502b83c3ceabdeb0e47c056e1b34acbb2533dd1165"} Feb 18 11:51:19 crc kubenswrapper[4922]: I0218 11:51:19.892100 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j769b" podStartSLOduration=2.460582113 podStartE2EDuration="4.892085732s" podCreationTimestamp="2026-02-18 11:51:15 +0000 UTC" firstStartedPulling="2026-02-18 11:51:16.836542523 +0000 UTC m=+878.564246603" lastFinishedPulling="2026-02-18 11:51:19.268046102 +0000 UTC m=+880.995750222" observedRunningTime="2026-02-18 11:51:19.887178888 +0000 UTC m=+881.614882968" watchObservedRunningTime="2026-02-18 11:51:19.892085732 +0000 UTC m=+881.619789812" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.672576 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v"] Feb 18 11:51:21 crc kubenswrapper[4922]: E0218 11:51:21.673259 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" 
containerName="pull" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.673270 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="pull" Feb 18 11:51:21 crc kubenswrapper[4922]: E0218 11:51:21.673291 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="extract" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.673298 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="extract" Feb 18 11:51:21 crc kubenswrapper[4922]: E0218 11:51:21.673308 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="util" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.673313 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="util" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.673442 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="83694df8-b6fe-4913-8f73-d53972c81f36" containerName="extract" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.673852 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.675998 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-dst44" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.697540 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v"] Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.783094 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k25d\" (UniqueName: \"kubernetes.io/projected/51a617b6-1c84-446a-a342-bd0687227c0c-kube-api-access-5k25d\") pod \"openstack-operator-controller-init-f8b4c896c-mdz6v\" (UID: \"51a617b6-1c84-446a-a342-bd0687227c0c\") " pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.885564 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k25d\" (UniqueName: \"kubernetes.io/projected/51a617b6-1c84-446a-a342-bd0687227c0c-kube-api-access-5k25d\") pod \"openstack-operator-controller-init-f8b4c896c-mdz6v\" (UID: \"51a617b6-1c84-446a-a342-bd0687227c0c\") " pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.911742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k25d\" (UniqueName: \"kubernetes.io/projected/51a617b6-1c84-446a-a342-bd0687227c0c-kube-api-access-5k25d\") pod \"openstack-operator-controller-init-f8b4c896c-mdz6v\" (UID: \"51a617b6-1c84-446a-a342-bd0687227c0c\") " pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:21 crc kubenswrapper[4922]: I0218 11:51:21.993281 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:22 crc kubenswrapper[4922]: I0218 11:51:22.430070 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v"] Feb 18 11:51:22 crc kubenswrapper[4922]: W0218 11:51:22.441238 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51a617b6_1c84_446a_a342_bd0687227c0c.slice/crio-4df226afa3df4722740fefe68da25a4b751a0d880bf7841972b9a74778777a24 WatchSource:0}: Error finding container 4df226afa3df4722740fefe68da25a4b751a0d880bf7841972b9a74778777a24: Status 404 returned error can't find the container with id 4df226afa3df4722740fefe68da25a4b751a0d880bf7841972b9a74778777a24 Feb 18 11:51:22 crc kubenswrapper[4922]: I0218 11:51:22.896460 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" event={"ID":"51a617b6-1c84-446a-a342-bd0687227c0c","Type":"ContainerStarted","Data":"4df226afa3df4722740fefe68da25a4b751a0d880bf7841972b9a74778777a24"} Feb 18 11:51:25 crc kubenswrapper[4922]: I0218 11:51:25.658033 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:25 crc kubenswrapper[4922]: I0218 11:51:25.658295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:25 crc kubenswrapper[4922]: I0218 11:51:25.714377 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:25 crc kubenswrapper[4922]: I0218 11:51:25.960687 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:26 crc kubenswrapper[4922]: I0218 11:51:26.924899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" event={"ID":"51a617b6-1c84-446a-a342-bd0687227c0c","Type":"ContainerStarted","Data":"b0906d449d442e9dff94a969db860fb792187b709eadf0bc697c597c37a9c9c2"} Feb 18 11:51:26 crc kubenswrapper[4922]: I0218 11:51:26.925349 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:26 crc kubenswrapper[4922]: I0218 11:51:26.960567 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" podStartSLOduration=1.650565128 podStartE2EDuration="5.960552076s" podCreationTimestamp="2026-02-18 11:51:21 +0000 UTC" firstStartedPulling="2026-02-18 11:51:22.447113016 +0000 UTC m=+884.174817096" lastFinishedPulling="2026-02-18 11:51:26.757099964 +0000 UTC m=+888.484804044" observedRunningTime="2026-02-18 11:51:26.956178485 +0000 UTC m=+888.683882585" watchObservedRunningTime="2026-02-18 11:51:26.960552076 +0000 UTC m=+888.688256156" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.517556 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.518964 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.527584 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.685384 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.685534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sn6t\" (UniqueName: \"kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.685561 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.787077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sn6t\" (UniqueName: \"kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.787135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.787197 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.787792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.787868 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.811598 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6sn6t\" (UniqueName: \"kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t\") pod \"certified-operators-bk7cv\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:28 crc kubenswrapper[4922]: I0218 11:51:28.836525 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.094579 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:29 crc kubenswrapper[4922]: W0218 11:51:29.101244 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53648a02_b284_431d_8ad7_11d9633b0149.slice/crio-50d1a8b8ee20e9292f4d6b1978fd5e8273fa913669322f1ad46ae1142887a476 WatchSource:0}: Error finding container 50d1a8b8ee20e9292f4d6b1978fd5e8273fa913669322f1ad46ae1142887a476: Status 404 returned error can't find the container with id 50d1a8b8ee20e9292f4d6b1978fd5e8273fa913669322f1ad46ae1142887a476 Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.313010 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.313284 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j769b" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="registry-server" containerID="cri-o://7457c3c4cfe051c442809f502b83c3ceabdeb0e47c056e1b34acbb2533dd1165" gracePeriod=2 Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.957175 4922 generic.go:334] "Generic (PLEG): container finished" podID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerID="7457c3c4cfe051c442809f502b83c3ceabdeb0e47c056e1b34acbb2533dd1165" exitCode=0 Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.957273 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerDied","Data":"7457c3c4cfe051c442809f502b83c3ceabdeb0e47c056e1b34acbb2533dd1165"} Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.958522 4922 generic.go:334] "Generic (PLEG): container finished" podID="53648a02-b284-431d-8ad7-11d9633b0149" containerID="c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4" exitCode=0 Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.958551 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerDied","Data":"c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4"} Feb 18 11:51:29 crc kubenswrapper[4922]: I0218 11:51:29.958576 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerStarted","Data":"50d1a8b8ee20e9292f4d6b1978fd5e8273fa913669322f1ad46ae1142887a476"} Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.287962 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.408581 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities\") pod \"c0a7f927-eadd-4bed-85f2-4347306f598f\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.408771 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67mpt\" (UniqueName: \"kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt\") pod \"c0a7f927-eadd-4bed-85f2-4347306f598f\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.408832 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content\") pod \"c0a7f927-eadd-4bed-85f2-4347306f598f\" (UID: \"c0a7f927-eadd-4bed-85f2-4347306f598f\") " Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.409443 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities" (OuterVolumeSpecName: "utilities") pod "c0a7f927-eadd-4bed-85f2-4347306f598f" (UID: "c0a7f927-eadd-4bed-85f2-4347306f598f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.415202 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt" (OuterVolumeSpecName: "kube-api-access-67mpt") pod "c0a7f927-eadd-4bed-85f2-4347306f598f" (UID: "c0a7f927-eadd-4bed-85f2-4347306f598f"). InnerVolumeSpecName "kube-api-access-67mpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.477003 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0a7f927-eadd-4bed-85f2-4347306f598f" (UID: "c0a7f927-eadd-4bed-85f2-4347306f598f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.511027 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67mpt\" (UniqueName: \"kubernetes.io/projected/c0a7f927-eadd-4bed-85f2-4347306f598f-kube-api-access-67mpt\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.511076 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.511085 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0a7f927-eadd-4bed-85f2-4347306f598f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.970032 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j769b" event={"ID":"c0a7f927-eadd-4bed-85f2-4347306f598f","Type":"ContainerDied","Data":"5831f18aa88a23531ab09da14d84f9930cfd9b5366af084b1e7687e0f60fab18"} Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.970093 4922 scope.go:117] "RemoveContainer" containerID="7457c3c4cfe051c442809f502b83c3ceabdeb0e47c056e1b34acbb2533dd1165" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.970217 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j769b" Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.985990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerStarted","Data":"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9"} Feb 18 11:51:30 crc kubenswrapper[4922]: I0218 11:51:30.997320 4922 scope.go:117] "RemoveContainer" containerID="e109ea9bbc68468ccc7af684b8cf48822d1125374586c5006fdd34b142717ecb" Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.027771 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.035130 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j769b"] Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.035171 4922 scope.go:117] "RemoveContainer" containerID="d47389a21468630e8d39ae2bd35959f5913e72d9622874ba171d46cf996381ed" Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.990973 4922 generic.go:334] "Generic (PLEG): container finished" podID="53648a02-b284-431d-8ad7-11d9633b0149" containerID="ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9" exitCode=0 Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.991088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerDied","Data":"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9"} Feb 18 11:51:31 crc kubenswrapper[4922]: I0218 11:51:31.998292 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f8b4c896c-mdz6v" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.927838 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] 
Feb 18 11:51:32 crc kubenswrapper[4922]: E0218 11:51:32.928868 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="extract-utilities" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.928971 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="extract-utilities" Feb 18 11:51:32 crc kubenswrapper[4922]: E0218 11:51:32.929062 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="registry-server" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.929145 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="registry-server" Feb 18 11:51:32 crc kubenswrapper[4922]: E0218 11:51:32.929329 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="extract-content" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.929434 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="extract-content" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.929688 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" containerName="registry-server" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.930979 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.967318 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] Feb 18 11:51:32 crc kubenswrapper[4922]: I0218 11:51:32.986604 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a7f927-eadd-4bed-85f2-4347306f598f" path="/var/lib/kubelet/pods/c0a7f927-eadd-4bed-85f2-4347306f598f/volumes" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.052651 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.052735 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbf5\" (UniqueName: \"kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.052836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.154659 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content\") pod \"redhat-marketplace-p58lq\" (UID: 
\"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.154792 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbf5\" (UniqueName: \"kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.155281 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.155813 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.156297 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.185728 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbf5\" (UniqueName: \"kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5\") pod \"redhat-marketplace-p58lq\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.266328 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:33 crc kubenswrapper[4922]: I0218 11:51:33.526435 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] Feb 18 11:51:34 crc kubenswrapper[4922]: I0218 11:51:34.007736 4922 generic.go:334] "Generic (PLEG): container finished" podID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerID="d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a" exitCode=0 Feb 18 11:51:34 crc kubenswrapper[4922]: I0218 11:51:34.007778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerDied","Data":"d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a"} Feb 18 11:51:34 crc kubenswrapper[4922]: I0218 11:51:34.007804 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerStarted","Data":"da79be7f7e5f68550d58344941e1e73d8c70455fdf2e899d8d747aef7496924b"} Feb 18 11:51:35 crc kubenswrapper[4922]: I0218 11:51:35.017884 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerStarted","Data":"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9"} Feb 18 11:51:35 crc kubenswrapper[4922]: I0218 11:51:35.022059 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerStarted","Data":"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c"} Feb 18 11:51:35 crc kubenswrapper[4922]: I0218 11:51:35.053260 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bk7cv" podStartSLOduration=3.095865188 podStartE2EDuration="7.053245112s" podCreationTimestamp="2026-02-18 11:51:28 +0000 UTC" firstStartedPulling="2026-02-18 11:51:29.960084341 +0000 UTC m=+891.687788421" lastFinishedPulling="2026-02-18 11:51:33.917464265 +0000 UTC m=+895.645168345" observedRunningTime="2026-02-18 11:51:35.048341208 +0000 UTC m=+896.776045298" watchObservedRunningTime="2026-02-18 11:51:35.053245112 +0000 UTC m=+896.780949192" Feb 18 11:51:36 crc kubenswrapper[4922]: I0218 11:51:36.034149 4922 generic.go:334] "Generic (PLEG): container finished" podID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerID="862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c" exitCode=0 Feb 18 11:51:36 crc kubenswrapper[4922]: I0218 11:51:36.034270 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerDied","Data":"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c"} Feb 18 11:51:38 crc kubenswrapper[4922]: I0218 11:51:38.050127 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerStarted","Data":"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a"} Feb 18 11:51:38 crc kubenswrapper[4922]: I0218 11:51:38.073277 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p58lq" podStartSLOduration=3.309403044 
podStartE2EDuration="6.073255944s" podCreationTimestamp="2026-02-18 11:51:32 +0000 UTC" firstStartedPulling="2026-02-18 11:51:34.009537438 +0000 UTC m=+895.737241518" lastFinishedPulling="2026-02-18 11:51:36.773390328 +0000 UTC m=+898.501094418" observedRunningTime="2026-02-18 11:51:38.07032234 +0000 UTC m=+899.798026420" watchObservedRunningTime="2026-02-18 11:51:38.073255944 +0000 UTC m=+899.800960024" Feb 18 11:51:38 crc kubenswrapper[4922]: I0218 11:51:38.837352 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:38 crc kubenswrapper[4922]: I0218 11:51:38.837440 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:38 crc kubenswrapper[4922]: I0218 11:51:38.926033 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:39 crc kubenswrapper[4922]: I0218 11:51:39.100641 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:39 crc kubenswrapper[4922]: I0218 11:51:39.911541 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:41 crc kubenswrapper[4922]: I0218 11:51:41.069990 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bk7cv" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="registry-server" containerID="cri-o://878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9" gracePeriod=2 Feb 18 11:51:41 crc kubenswrapper[4922]: I0218 11:51:41.954955 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.077756 4922 generic.go:334] "Generic (PLEG): container finished" podID="53648a02-b284-431d-8ad7-11d9633b0149" containerID="878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9" exitCode=0 Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.077807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerDied","Data":"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9"} Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.077834 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bk7cv" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.077853 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bk7cv" event={"ID":"53648a02-b284-431d-8ad7-11d9633b0149","Type":"ContainerDied","Data":"50d1a8b8ee20e9292f4d6b1978fd5e8273fa913669322f1ad46ae1142887a476"} Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.077871 4922 scope.go:117] "RemoveContainer" containerID="878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.093281 4922 scope.go:117] "RemoveContainer" containerID="ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.096037 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities\") pod \"53648a02-b284-431d-8ad7-11d9633b0149\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.096088 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sn6t\" (UniqueName: \"kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t\") pod \"53648a02-b284-431d-8ad7-11d9633b0149\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.096178 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content\") pod \"53648a02-b284-431d-8ad7-11d9633b0149\" (UID: \"53648a02-b284-431d-8ad7-11d9633b0149\") " Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.096966 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities" (OuterVolumeSpecName: "utilities") pod "53648a02-b284-431d-8ad7-11d9633b0149" (UID: "53648a02-b284-431d-8ad7-11d9633b0149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.104900 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t" (OuterVolumeSpecName: "kube-api-access-6sn6t") pod "53648a02-b284-431d-8ad7-11d9633b0149" (UID: "53648a02-b284-431d-8ad7-11d9633b0149"). InnerVolumeSpecName "kube-api-access-6sn6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.124899 4922 scope.go:117] "RemoveContainer" containerID="c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.158741 4922 scope.go:117] "RemoveContainer" containerID="878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9" Feb 18 11:51:42 crc kubenswrapper[4922]: E0218 11:51:42.159294 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9\": container with ID starting with 878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9 not found: ID does not exist" containerID="878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.159338 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9"} err="failed to get container status \"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9\": rpc error: code = NotFound desc = could not find container \"878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9\": container with ID starting with 878c0580bbc8862d88a8e46bf5f5871de16dcdeb37ec26cd61e5cfea1dbd4ed9 not found: ID does not exist" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.159392 4922 scope.go:117] "RemoveContainer" containerID="ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9" Feb 18 11:51:42 crc kubenswrapper[4922]: E0218 11:51:42.159929 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9\": container with ID starting with ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9 not found: ID does not exist" containerID="ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.159959 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9"} err="failed to get container status \"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9\": rpc error: code = NotFound desc = could not find container \"ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9\": container with ID starting with ec4ff970101ce6b6fb9d0d73cbeb58c38bed6a2f72f5b7ed86b2120b52d04ca9 not found: ID does not exist" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.159976 4922 scope.go:117] "RemoveContainer" containerID="c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4" Feb 18 11:51:42 crc kubenswrapper[4922]: E0218 11:51:42.160428 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4\": container with ID starting with c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4 not found: ID does not exist" containerID="c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.160489 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4"} err="failed to get container status \"c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4\": rpc error: code = NotFound desc = could not find container \"c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4\": container with ID starting with c290c7904b39cf38f89a34e85673fbbb9bef3d95a11b5f7e3b6bf2302fec2ca4 not found: ID does not exist" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.163752 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53648a02-b284-431d-8ad7-11d9633b0149" (UID: "53648a02-b284-431d-8ad7-11d9633b0149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.198176 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.198213 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sn6t\" (UniqueName: \"kubernetes.io/projected/53648a02-b284-431d-8ad7-11d9633b0149-kube-api-access-6sn6t\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.198224 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53648a02-b284-431d-8ad7-11d9633b0149-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.405745 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.410687 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bk7cv"] Feb 18 11:51:42 crc kubenswrapper[4922]: I0218 11:51:42.981799 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53648a02-b284-431d-8ad7-11d9633b0149" path="/var/lib/kubelet/pods/53648a02-b284-431d-8ad7-11d9633b0149/volumes" Feb 18 11:51:43 crc kubenswrapper[4922]: I0218 11:51:43.267251 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:43 crc kubenswrapper[4922]: I0218 11:51:43.267302 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:43 crc kubenswrapper[4922]: I0218 11:51:43.318303 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:44 crc kubenswrapper[4922]: I0218 11:51:44.148694 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:45 crc kubenswrapper[4922]: I0218 11:51:45.317520 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.105609 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p58lq" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="registry-server" 
containerID="cri-o://6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a" gracePeriod=2 Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.546764 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.657057 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content\") pod \"db4115a8-0d17-481f-8dee-87d0cb403c71\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.657117 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cbf5\" (UniqueName: \"kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5\") pod \"db4115a8-0d17-481f-8dee-87d0cb403c71\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.657169 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities\") pod \"db4115a8-0d17-481f-8dee-87d0cb403c71\" (UID: \"db4115a8-0d17-481f-8dee-87d0cb403c71\") " Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.658189 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities" (OuterVolumeSpecName: "utilities") pod "db4115a8-0d17-481f-8dee-87d0cb403c71" (UID: "db4115a8-0d17-481f-8dee-87d0cb403c71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.664597 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5" (OuterVolumeSpecName: "kube-api-access-6cbf5") pod "db4115a8-0d17-481f-8dee-87d0cb403c71" (UID: "db4115a8-0d17-481f-8dee-87d0cb403c71"). InnerVolumeSpecName "kube-api-access-6cbf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.698977 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db4115a8-0d17-481f-8dee-87d0cb403c71" (UID: "db4115a8-0d17-481f-8dee-87d0cb403c71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.758845 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.759199 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cbf5\" (UniqueName: \"kubernetes.io/projected/db4115a8-0d17-481f-8dee-87d0cb403c71-kube-api-access-6cbf5\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:46 crc kubenswrapper[4922]: I0218 11:51:46.759283 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db4115a8-0d17-481f-8dee-87d0cb403c71-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.114324 4922 generic.go:334] "Generic (PLEG): container finished" podID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerID="6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a" exitCode=0 Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.114386 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerDied","Data":"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a"} Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.114413 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p58lq" event={"ID":"db4115a8-0d17-481f-8dee-87d0cb403c71","Type":"ContainerDied","Data":"da79be7f7e5f68550d58344941e1e73d8c70455fdf2e899d8d747aef7496924b"} Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.114429 4922 scope.go:117] "RemoveContainer" containerID="6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.114775 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p58lq" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.134754 4922 scope.go:117] "RemoveContainer" containerID="862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.142686 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.150190 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p58lq"] Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.165701 4922 scope.go:117] "RemoveContainer" containerID="d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.184700 4922 scope.go:117] "RemoveContainer" containerID="6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a" Feb 18 11:51:47 crc kubenswrapper[4922]: E0218 11:51:47.185238 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a\": container with ID starting with 6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a not found: ID does not exist" containerID="6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.185290 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a"} err="failed to get container status \"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a\": rpc error: code = NotFound desc = could not find container \"6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a\": container with ID starting with 6062a677bdd3c0fe63d63f3e2d2ffd7e256189a1991d655f3cbb250dc14f3a2a not found: ID does not exist" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.185325 4922 scope.go:117] "RemoveContainer" containerID="862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c" Feb 18 11:51:47 crc kubenswrapper[4922]: E0218 11:51:47.185713 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c\": container with ID starting with 862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c not found: ID does not exist" containerID="862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.185751 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c"} err="failed to get container status \"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c\": rpc error: code = NotFound desc = could not find container \"862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c\": container with ID starting with 862386f6582f32e25d28a0f4f7fa1fed459b4fd5dd51c2ac625bc35c25f9ad0c not found: ID does not exist" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.185780 4922 scope.go:117] "RemoveContainer" containerID="d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a" Feb 18 11:51:47 crc kubenswrapper[4922]: E0218 11:51:47.186037 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a\": container with ID starting with d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a not found: ID does not exist" containerID="d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a" Feb 18 11:51:47 crc kubenswrapper[4922]: I0218 11:51:47.186073 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a"} err="failed to get container status \"d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a\": rpc error: code = NotFound desc = could not find container \"d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a\": container with ID starting with d3d937b23ea0593597efdc3795dc6f7ac95dc8a38af4b1dd12165bb613b69c2a not found: ID does not exist" Feb 18 11:51:48 crc kubenswrapper[4922]: I0218 11:51:48.981126 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" path="/var/lib/kubelet/pods/db4115a8-0d17-481f-8dee-87d0cb403c71/volumes" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.764663 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk"] Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765592 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765607 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765633 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="extract-content" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765641 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="extract-content" Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765653 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="extract-utilities" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765661 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="extract-utilities" Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765672 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="extract-content" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765679 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="extract-content" Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765689 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="extract-utilities" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765696 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="extract-utilities" Feb 18 11:51:51 crc kubenswrapper[4922]: E0218 11:51:51.765705 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53648a02-b284-431d-8ad7-11d9633b0149" 
containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765711 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765851 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4115a8-0d17-481f-8dee-87d0cb403c71" containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.765875 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="53648a02-b284-431d-8ad7-11d9633b0149" containerName="registry-server" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.766387 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.770118 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zqd4d" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.780949 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.781934 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.785687 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nhx4z" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.805666 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.807244 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.825467 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ktdtd" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.851301 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.891012 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.908235 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.914538 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.915419 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.923076 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.923142 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wscdl" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.924112 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.932875 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-94cg6" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.940104 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fnf\" (UniqueName: \"kubernetes.io/projected/01766bee-50bd-4dcb-9b3d-831486ddeaf4-kube-api-access-z5fnf\") pod \"designate-operator-controller-manager-6d8bf5c495-2ncv8\" (UID: \"01766bee-50bd-4dcb-9b3d-831486ddeaf4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.940183 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6pt7\" (UniqueName: \"kubernetes.io/projected/ae81863a-2778-4505-9106-c850f873a75d-kube-api-access-f6pt7\") pod \"barbican-operator-controller-manager-868647ff47-f8lbk\" (UID: \"ae81863a-2778-4505-9106-c850f873a75d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.940220 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpq2\" (UniqueName: \"kubernetes.io/projected/61f73f1d-e472-411e-adc0-6755c47aa72b-kube-api-access-zzpq2\") pod \"cinder-operator-controller-manager-5d946d989d-6z2cq\" (UID: \"61f73f1d-e472-411e-adc0-6755c47aa72b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.940288 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.960192 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.971090 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr"] Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.972152 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.979771 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p5tfh" Feb 18 11:51:51 crc kubenswrapper[4922]: I0218 11:51:51.986777 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.000993 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-krt25"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.002212 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.005188 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9prpc" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.005446 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.042081 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.043136 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.045336 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8w9l\" (UniqueName: \"kubernetes.io/projected/51cd14ee-9b8a-421f-80bb-d208b752079d-kube-api-access-z8w9l\") pod \"heat-operator-controller-manager-69f49c598c-qm24h\" (UID: \"51cd14ee-9b8a-421f-80bb-d208b752079d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.045418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6pt7\" (UniqueName: \"kubernetes.io/projected/ae81863a-2778-4505-9106-c850f873a75d-kube-api-access-f6pt7\") pod \"barbican-operator-controller-manager-868647ff47-f8lbk\" (UID: \"ae81863a-2778-4505-9106-c850f873a75d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.045467 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpq2\" (UniqueName: \"kubernetes.io/projected/61f73f1d-e472-411e-adc0-6755c47aa72b-kube-api-access-zzpq2\") pod \"cinder-operator-controller-manager-5d946d989d-6z2cq\" (UID: \"61f73f1d-e472-411e-adc0-6755c47aa72b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.045523 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8k6q\" (UniqueName: \"kubernetes.io/projected/4c9af0bf-50d7-42ef-a8df-241b5ec63f5a-kube-api-access-f8k6q\") pod \"glance-operator-controller-manager-77987464f4-bnvrn\" (UID: \"4c9af0bf-50d7-42ef-a8df-241b5ec63f5a\") " 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.045558 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fnf\" (UniqueName: \"kubernetes.io/projected/01766bee-50bd-4dcb-9b3d-831486ddeaf4-kube-api-access-z5fnf\") pod \"designate-operator-controller-manager-6d8bf5c495-2ncv8\" (UID: \"01766bee-50bd-4dcb-9b3d-831486ddeaf4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.050460 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5h586" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.055461 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.056270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.072315 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-krt25"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.073208 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fnf\" (UniqueName: \"kubernetes.io/projected/01766bee-50bd-4dcb-9b3d-831486ddeaf4-kube-api-access-z5fnf\") pod \"designate-operator-controller-manager-6d8bf5c495-2ncv8\" (UID: \"01766bee-50bd-4dcb-9b3d-831486ddeaf4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.073616 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-r7jsl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.079583 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.081994 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6pt7\" (UniqueName: \"kubernetes.io/projected/ae81863a-2778-4505-9106-c850f873a75d-kube-api-access-f6pt7\") pod \"barbican-operator-controller-manager-868647ff47-f8lbk\" (UID: \"ae81863a-2778-4505-9106-c850f873a75d\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.088413 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.089211 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.089749 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.093628 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-c597h"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.094730 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.102029 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bmlmc" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.102260 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-b5qxz" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.102285 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.104485 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpq2\" (UniqueName: \"kubernetes.io/projected/61f73f1d-e472-411e-adc0-6755c47aa72b-kube-api-access-zzpq2\") pod \"cinder-operator-controller-manager-5d946d989d-6z2cq\" (UID: \"61f73f1d-e472-411e-adc0-6755c47aa72b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.126789 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.139887 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147639 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147746 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8k6q\" (UniqueName: \"kubernetes.io/projected/4c9af0bf-50d7-42ef-a8df-241b5ec63f5a-kube-api-access-f8k6q\") pod \"glance-operator-controller-manager-77987464f4-bnvrn\" (UID: \"4c9af0bf-50d7-42ef-a8df-241b5ec63f5a\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147809 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cl4\" (UniqueName: \"kubernetes.io/projected/3c16d873-1097-4f56-913f-cc366ed34c23-kube-api-access-55cl4\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147849 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvn5\" (UniqueName: \"kubernetes.io/projected/0032092e-84ca-426d-8f15-5141f4a8da20-kube-api-access-rlvn5\") pod \"horizon-operator-controller-manager-5b9b8895d5-82hvr\" (UID: \"0032092e-84ca-426d-8f15-5141f4a8da20\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147910 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvwl\" (UniqueName: \"kubernetes.io/projected/324031ff-ceae-4065-9955-fd5745647cb0-kube-api-access-kfvwl\") pod \"keystone-operator-controller-manager-b4d948c87-jtfzr\" (UID: \"324031ff-ceae-4065-9955-fd5745647cb0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147937 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjgd\" (UniqueName: \"kubernetes.io/projected/7753280d-fc59-4887-9d87-a2cfd83e7ba9-kube-api-access-xpjgd\") pod \"ironic-operator-controller-manager-554564d7fc-r4v59\" (UID: \"7753280d-fc59-4887-9d87-a2cfd83e7ba9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.147971 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8w9l\" (UniqueName: \"kubernetes.io/projected/51cd14ee-9b8a-421f-80bb-d208b752079d-kube-api-access-z8w9l\") pod \"heat-operator-controller-manager-69f49c598c-qm24h\" (UID: \"51cd14ee-9b8a-421f-80bb-d208b752079d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.157097 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.175437 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-c597h"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.192989 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.194119 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.202868 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z5wq8" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.205219 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8w9l\" (UniqueName: \"kubernetes.io/projected/51cd14ee-9b8a-421f-80bb-d208b752079d-kube-api-access-z8w9l\") pod \"heat-operator-controller-manager-69f49c598c-qm24h\" (UID: \"51cd14ee-9b8a-421f-80bb-d208b752079d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.205285 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.205563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8k6q\" (UniqueName: \"kubernetes.io/projected/4c9af0bf-50d7-42ef-a8df-241b5ec63f5a-kube-api-access-f8k6q\") pod \"glance-operator-controller-manager-77987464f4-bnvrn\" (UID: \"4c9af0bf-50d7-42ef-a8df-241b5ec63f5a\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.206086 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.213154 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xwfl6" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.236974 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.241752 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.243611 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.246637 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tbnxp" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.249235 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255089 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cl4\" (UniqueName: \"kubernetes.io/projected/3c16d873-1097-4f56-913f-cc366ed34c23-kube-api-access-55cl4\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255132 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvn5\" (UniqueName: \"kubernetes.io/projected/0032092e-84ca-426d-8f15-5141f4a8da20-kube-api-access-rlvn5\") pod \"horizon-operator-controller-manager-5b9b8895d5-82hvr\" (UID: \"0032092e-84ca-426d-8f15-5141f4a8da20\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255194 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghd7h\" (UniqueName: \"kubernetes.io/projected/2936db6d-8a5b-4da8-9e52-e508a6e757fe-kube-api-access-ghd7h\") pod \"manila-operator-controller-manager-54f6768c69-c597h\" (UID: \"2936db6d-8a5b-4da8-9e52-e508a6e757fe\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255228 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvwl\" (UniqueName: \"kubernetes.io/projected/324031ff-ceae-4065-9955-fd5745647cb0-kube-api-access-kfvwl\") pod \"keystone-operator-controller-manager-b4d948c87-jtfzr\" (UID: \"324031ff-ceae-4065-9955-fd5745647cb0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255262 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjgd\" (UniqueName: \"kubernetes.io/projected/7753280d-fc59-4887-9d87-a2cfd83e7ba9-kube-api-access-xpjgd\") pod \"ironic-operator-controller-manager-554564d7fc-r4v59\" (UID: \"7753280d-fc59-4887-9d87-a2cfd83e7ba9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255310 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfkh\" (UniqueName: \"kubernetes.io/projected/0a8811b6-4023-427d-a893-628e0dd338e8-kube-api-access-bzfkh\") pod \"mariadb-operator-controller-manager-6994f66f48-tn47v\" (UID: \"0a8811b6-4023-427d-a893-628e0dd338e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.255415 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod 
\"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.255570 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.255633 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.755610735 +0000 UTC m=+914.483314815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.257923 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.267544 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.281380 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.283633 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvwl\" (UniqueName: \"kubernetes.io/projected/324031ff-ceae-4065-9955-fd5745647cb0-kube-api-access-kfvwl\") pod \"keystone-operator-controller-manager-b4d948c87-jtfzr\" (UID: \"324031ff-ceae-4065-9955-fd5745647cb0\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.286218 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjgd\" (UniqueName: \"kubernetes.io/projected/7753280d-fc59-4887-9d87-a2cfd83e7ba9-kube-api-access-xpjgd\") pod \"ironic-operator-controller-manager-554564d7fc-r4v59\" (UID: \"7753280d-fc59-4887-9d87-a2cfd83e7ba9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.290401 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvn5\" (UniqueName: \"kubernetes.io/projected/0032092e-84ca-426d-8f15-5141f4a8da20-kube-api-access-rlvn5\") pod \"horizon-operator-controller-manager-5b9b8895d5-82hvr\" (UID: \"0032092e-84ca-426d-8f15-5141f4a8da20\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.301723 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.302928 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.303497 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.304812 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.305055 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dbfjs" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.305621 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.306765 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cl4\" (UniqueName: \"kubernetes.io/projected/3c16d873-1097-4f56-913f-cc366ed34c23-kube-api-access-55cl4\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.307937 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-bttjx" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.377194 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghd7h\" (UniqueName: \"kubernetes.io/projected/2936db6d-8a5b-4da8-9e52-e508a6e757fe-kube-api-access-ghd7h\") pod \"manila-operator-controller-manager-54f6768c69-c597h\" (UID: \"2936db6d-8a5b-4da8-9e52-e508a6e757fe\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.377338 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrbmr\" (UniqueName: \"kubernetes.io/projected/a7487625-0c9e-4396-8eb8-5840ce4344c8-kube-api-access-lrbmr\") pod \"neutron-operator-controller-manager-64ddbf8bb-gwbk7\" (UID: \"a7487625-0c9e-4396-8eb8-5840ce4344c8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.379528 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfkh\" (UniqueName: \"kubernetes.io/projected/0a8811b6-4023-427d-a893-628e0dd338e8-kube-api-access-bzfkh\") pod \"mariadb-operator-controller-manager-6994f66f48-tn47v\" (UID: \"0a8811b6-4023-427d-a893-628e0dd338e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.380535 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngz5n\" (UniqueName: \"kubernetes.io/projected/8eae5053-64f3-401a-a151-dbf22f30a845-kube-api-access-ngz5n\") pod \"octavia-operator-controller-manager-69f8888797-4fm4m\" (UID: \"8eae5053-64f3-401a-a151-dbf22f30a845\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.380592 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57qx\" (UniqueName: \"kubernetes.io/projected/90b4a58a-81d7-4129-8f45-5429e963676e-kube-api-access-l57qx\") pod 
\"nova-operator-controller-manager-567668f5cf-wrd8w\" (UID: \"90b4a58a-81d7-4129-8f45-5429e963676e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.391945 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.394707 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.409732 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.414185 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xgr9v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.421339 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfkh\" (UniqueName: \"kubernetes.io/projected/0a8811b6-4023-427d-a893-628e0dd338e8-kube-api-access-bzfkh\") pod \"mariadb-operator-controller-manager-6994f66f48-tn47v\" (UID: \"0a8811b6-4023-427d-a893-628e0dd338e8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.446203 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghd7h\" (UniqueName: \"kubernetes.io/projected/2936db6d-8a5b-4da8-9e52-e508a6e757fe-kube-api-access-ghd7h\") pod \"manila-operator-controller-manager-54f6768c69-c597h\" (UID: \"2936db6d-8a5b-4da8-9e52-e508a6e757fe\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.448262 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.459595 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.460141 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.461297 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.467857 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-twl8v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.472318 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.481407 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.486176 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczzm\" (UniqueName: \"kubernetes.io/projected/66351682-3cdf-41cc-80d9-0bbb020144d2-kube-api-access-hczzm\") pod \"placement-operator-controller-manager-8497b45c89-hddmr\" (UID: \"66351682-3cdf-41cc-80d9-0bbb020144d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.486271 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngz5n\" (UniqueName: \"kubernetes.io/projected/8eae5053-64f3-401a-a151-dbf22f30a845-kube-api-access-ngz5n\") pod \"octavia-operator-controller-manager-69f8888797-4fm4m\" (UID: \"8eae5053-64f3-401a-a151-dbf22f30a845\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.486303 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57qx\" (UniqueName: \"kubernetes.io/projected/90b4a58a-81d7-4129-8f45-5429e963676e-kube-api-access-l57qx\") pod \"nova-operator-controller-manager-567668f5cf-wrd8w\" (UID: \"90b4a58a-81d7-4129-8f45-5429e963676e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.486517 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrbmr\" (UniqueName: \"kubernetes.io/projected/a7487625-0c9e-4396-8eb8-5840ce4344c8-kube-api-access-lrbmr\") pod \"neutron-operator-controller-manager-64ddbf8bb-gwbk7\" (UID: \"a7487625-0c9e-4396-8eb8-5840ce4344c8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.486551 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmqv5\" (UniqueName: \"kubernetes.io/projected/42271b89-6aba-4e15-a2a1-856b656a1b6e-kube-api-access-qmqv5\") pod \"ovn-operator-controller-manager-d44cf6b75-z7pdl\" (UID: \"42271b89-6aba-4e15-a2a1-856b656a1b6e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.508701 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.514465 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.515588 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.517158 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57qx\" (UniqueName: \"kubernetes.io/projected/90b4a58a-81d7-4129-8f45-5429e963676e-kube-api-access-l57qx\") pod \"nova-operator-controller-manager-567668f5cf-wrd8w\" (UID: \"90b4a58a-81d7-4129-8f45-5429e963676e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.521292 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngz5n\" (UniqueName: \"kubernetes.io/projected/8eae5053-64f3-401a-a151-dbf22f30a845-kube-api-access-ngz5n\") pod \"octavia-operator-controller-manager-69f8888797-4fm4m\" (UID: \"8eae5053-64f3-401a-a151-dbf22f30a845\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.522084 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9jtzj" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.527251 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.530660 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrbmr\" (UniqueName: \"kubernetes.io/projected/a7487625-0c9e-4396-8eb8-5840ce4344c8-kube-api-access-lrbmr\") pod \"neutron-operator-controller-manager-64ddbf8bb-gwbk7\" (UID: \"a7487625-0c9e-4396-8eb8-5840ce4344c8\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.553353 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.572496 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.583956 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588583 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmqv5\" (UniqueName: \"kubernetes.io/projected/42271b89-6aba-4e15-a2a1-856b656a1b6e-kube-api-access-qmqv5\") pod \"ovn-operator-controller-manager-d44cf6b75-z7pdl\" (UID: \"42271b89-6aba-4e15-a2a1-856b656a1b6e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588633 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hczzm\" (UniqueName: \"kubernetes.io/projected/66351682-3cdf-41cc-80d9-0bbb020144d2-kube-api-access-hczzm\") pod \"placement-operator-controller-manager-8497b45c89-hddmr\" (UID: \"66351682-3cdf-41cc-80d9-0bbb020144d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588663 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6xp\" (UniqueName: \"kubernetes.io/projected/081d9ec7-e338-437a-b3bc-af9b788db66a-kube-api-access-tc6xp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588717 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588774 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6pl\" (UniqueName: \"kubernetes.io/projected/387afbf1-afa5-414c-a22a-83a6a8197ff7-kube-api-access-zn6pl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-btlqf\" (UID: \"387afbf1-afa5-414c-a22a-83a6a8197ff7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.588798 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntkx\" (UniqueName: \"kubernetes.io/projected/183b09db-ca5a-4aa1-b87b-908de4dc44ff-kube-api-access-nntkx\") pod \"swift-operator-controller-manager-68f46476f-2bk9r\" (UID: \"183b09db-ca5a-4aa1-b87b-908de4dc44ff\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.597489 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.609673 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmqv5\" (UniqueName: \"kubernetes.io/projected/42271b89-6aba-4e15-a2a1-856b656a1b6e-kube-api-access-qmqv5\") pod \"ovn-operator-controller-manager-d44cf6b75-z7pdl\" (UID: \"42271b89-6aba-4e15-a2a1-856b656a1b6e\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.609766 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hczzm\" (UniqueName: \"kubernetes.io/projected/66351682-3cdf-41cc-80d9-0bbb020144d2-kube-api-access-hczzm\") pod \"placement-operator-controller-manager-8497b45c89-hddmr\" (UID: \"66351682-3cdf-41cc-80d9-0bbb020144d2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.620585 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-xdrrr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.621939 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.624244 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xhz5c" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.627640 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-xdrrr"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.650979 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.652397 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.658801 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.661045 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.661055 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qhg5z" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6pl\" (UniqueName: \"kubernetes.io/projected/387afbf1-afa5-414c-a22a-83a6a8197ff7-kube-api-access-zn6pl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-btlqf\" (UID: \"387afbf1-afa5-414c-a22a-83a6a8197ff7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689499 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntkx\" (UniqueName: \"kubernetes.io/projected/183b09db-ca5a-4aa1-b87b-908de4dc44ff-kube-api-access-nntkx\") pod \"swift-operator-controller-manager-68f46476f-2bk9r\" (UID: \"183b09db-ca5a-4aa1-b87b-908de4dc44ff\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689543 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7k5\" (UniqueName: \"kubernetes.io/projected/4c487619-568f-44a0-9d23-037794ada114-kube-api-access-nc7k5\") pod \"watcher-operator-controller-manager-5689f5d7c4-95x8t\" (UID: \"4c487619-568f-44a0-9d23-037794ada114\") " pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689578 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6xp\" (UniqueName: \"kubernetes.io/projected/081d9ec7-e338-437a-b3bc-af9b788db66a-kube-api-access-tc6xp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.689623 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzz8\" (UniqueName: \"kubernetes.io/projected/52123256-1372-49b6-80ed-c3112d14a8fa-kube-api-access-4gzz8\") pod \"test-operator-controller-manager-7866795846-xdrrr\" (UID: \"52123256-1372-49b6-80ed-c3112d14a8fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.689736 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.689776 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert 
podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.189762666 +0000 UTC m=+914.917466746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.695343 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.695964 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.712793 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6pl\" (UniqueName: \"kubernetes.io/projected/387afbf1-afa5-414c-a22a-83a6a8197ff7-kube-api-access-zn6pl\") pod \"telemetry-operator-controller-manager-7f45b4ff68-btlqf\" (UID: \"387afbf1-afa5-414c-a22a-83a6a8197ff7\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.713277 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6xp\" (UniqueName: \"kubernetes.io/projected/081d9ec7-e338-437a-b3bc-af9b788db66a-kube-api-access-tc6xp\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.714217 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntkx\" (UniqueName: \"kubernetes.io/projected/183b09db-ca5a-4aa1-b87b-908de4dc44ff-kube-api-access-nntkx\") pod \"swift-operator-controller-manager-68f46476f-2bk9r\" (UID: \"183b09db-ca5a-4aa1-b87b-908de4dc44ff\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.717759 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.741923 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.743658 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.747701 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.747752 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9rlc5" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.747972 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.763269 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.788300 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.790824 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.790859 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc7k5\" (UniqueName: \"kubernetes.io/projected/4c487619-568f-44a0-9d23-037794ada114-kube-api-access-nc7k5\") pod \"watcher-operator-controller-manager-5689f5d7c4-95x8t\" (UID: \"4c487619-568f-44a0-9d23-037794ada114\") " pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.790936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.790970 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzz8\" (UniqueName: \"kubernetes.io/projected/52123256-1372-49b6-80ed-c3112d14a8fa-kube-api-access-4gzz8\") pod \"test-operator-controller-manager-7866795846-xdrrr\" (UID: \"52123256-1372-49b6-80ed-c3112d14a8fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.791684 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: E0218 11:51:52.791909 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.791893122 +0000 UTC m=+915.519597202 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.794564 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b29wn" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.795238 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.801606 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.816746 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7k5\" (UniqueName: \"kubernetes.io/projected/4c487619-568f-44a0-9d23-037794ada114-kube-api-access-nc7k5\") pod \"watcher-operator-controller-manager-5689f5d7c4-95x8t\" (UID: \"4c487619-568f-44a0-9d23-037794ada114\") " pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.839427 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.843007 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzz8\" (UniqueName: \"kubernetes.io/projected/52123256-1372-49b6-80ed-c3112d14a8fa-kube-api-access-4gzz8\") pod \"test-operator-controller-manager-7866795846-xdrrr\" (UID: \"52123256-1372-49b6-80ed-c3112d14a8fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.872764 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.894977 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mh6\" (UniqueName: \"kubernetes.io/projected/d81b14bf-a056-4780-af1a-bf38babee5b3-kube-api-access-57mh6\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.895067 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.895093 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrwt\" (UniqueName: \"kubernetes.io/projected/69ef021e-1b46-4aeb-8023-93f6fb366396-kube-api-access-hvrwt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-98zrv\" (UID: \"69ef021e-1b46-4aeb-8023-93f6fb366396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.895150 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.909161 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8"] Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.933744 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq"] Feb 18 11:51:52 crc kubenswrapper[4922]: W0218 11:51:52.936468 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01766bee_50bd_4dcb_9b3d_831486ddeaf4.slice/crio-19acb113026a915dd4ce9f4fe1ac57917d836682a2553268a443680fae51107f WatchSource:0}: Error finding container 19acb113026a915dd4ce9f4fe1ac57917d836682a2553268a443680fae51107f: Status 404 returned error can't find the container with id 19acb113026a915dd4ce9f4fe1ac57917d836682a2553268a443680fae51107f Feb 18 11:51:52 crc kubenswrapper[4922]: I0218 11:51:52.945271 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:51:52 crc kubenswrapper[4922]: W0218 11:51:52.986558 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f73f1d_e472_411e_adc0_6755c47aa72b.slice/crio-52ff0b8371c8a99b343de4e6df91d29b03a6e78611a45c73c04ac853e4527c63 WatchSource:0}: Error finding container 52ff0b8371c8a99b343de4e6df91d29b03a6e78611a45c73c04ac853e4527c63: Status 404 returned error can't find the container with id 52ff0b8371c8a99b343de4e6df91d29b03a6e78611a45c73c04ac853e4527c63 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.000077 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.001348 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.001412 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvrwt\" (UniqueName: \"kubernetes.io/projected/69ef021e-1b46-4aeb-8023-93f6fb366396-kube-api-access-hvrwt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-98zrv\" (UID: \"69ef021e-1b46-4aeb-8023-93f6fb366396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.001457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.001515 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mh6\" (UniqueName: \"kubernetes.io/projected/d81b14bf-a056-4780-af1a-bf38babee5b3-kube-api-access-57mh6\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.001895 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.001972 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.50195399 +0000 UTC m=+915.229658070 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.002240 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.002273 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.502259458 +0000 UTC m=+915.229963538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.038068 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvrwt\" (UniqueName: \"kubernetes.io/projected/69ef021e-1b46-4aeb-8023-93f6fb366396-kube-api-access-hvrwt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-98zrv\" (UID: \"69ef021e-1b46-4aeb-8023-93f6fb366396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.045753 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mh6\" (UniqueName: \"kubernetes.io/projected/d81b14bf-a056-4780-af1a-bf38babee5b3-kube-api-access-57mh6\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.078872 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.107786 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.136708 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.172458 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.205742 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.205984 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.206047 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.206029297 +0000 UTC m=+915.933733377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.209526 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" event={"ID":"61f73f1d-e472-411e-adc0-6755c47aa72b","Type":"ContainerStarted","Data":"52ff0b8371c8a99b343de4e6df91d29b03a6e78611a45c73c04ac853e4527c63"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.211190 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" event={"ID":"ae81863a-2778-4505-9106-c850f873a75d","Type":"ContainerStarted","Data":"d316eb12f438139d42c0ec170cdc8521aebcb89b1d6e0bad06606ee4c60afabe"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.212247 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" event={"ID":"4c9af0bf-50d7-42ef-a8df-241b5ec63f5a","Type":"ContainerStarted","Data":"346670c4a29e5cbeb5110a5ec9ae34defeb274394842d7356c5ddb63453b5f3f"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.213165 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" event={"ID":"01766bee-50bd-4dcb-9b3d-831486ddeaf4","Type":"ContainerStarted","Data":"19acb113026a915dd4ce9f4fe1ac57917d836682a2553268a443680fae51107f"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.234573 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" event={"ID":"51cd14ee-9b8a-421f-80bb-d208b752079d","Type":"ContainerStarted","Data":"b921c092fab28063a92a42657c3522fca4ca2ef3974b76ef9ab0416c1fb67ee8"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.252737 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" 
event={"ID":"0032092e-84ca-426d-8f15-5141f4a8da20","Type":"ContainerStarted","Data":"8d7cceffd7f6603febb5d39ddb325bd8f746cfd19eb6f4b0ff54cf01e0776946"} Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.324882 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.334767 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59"] Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.345295 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a8811b6_4023_427d_a893_628e0dd338e8.slice/crio-8558ba6625dea2210b7b67db1e11d44c62b1c4bce4a06664d839c01b9382c622 WatchSource:0}: Error finding container 8558ba6625dea2210b7b67db1e11d44c62b1c4bce4a06664d839c01b9382c622: Status 404 returned error can't find the container with id 8558ba6625dea2210b7b67db1e11d44c62b1c4bce4a06664d839c01b9382c622 Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.347858 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7753280d_fc59_4887_9d87_a2cfd83e7ba9.slice/crio-0b141124e1520dbab2b2ea5158452a8e8b854d2cff468477b9a91b87418c6be6 WatchSource:0}: Error finding container 0b141124e1520dbab2b2ea5158452a8e8b854d2cff468477b9a91b87418c6be6: Status 404 returned error can't find the container with id 0b141124e1520dbab2b2ea5158452a8e8b854d2cff468477b9a91b87418c6be6 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.509439 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.509574 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.509574 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.509669 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.509646225 +0000 UTC m=+916.237350335 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.509746 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.509837 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.50982045 +0000 UTC m=+916.237524530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.561717 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr"] Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.577000 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod324031ff_ceae_4065_9955_fd5745647cb0.slice/crio-791eb39defb8137ec3bf31cfb665d8d3beb61451f0c1d645e315b166828a58a3 WatchSource:0}: Error finding container 791eb39defb8137ec3bf31cfb665d8d3beb61451f0c1d645e315b166828a58a3: Status 404 returned error can't find the container with id 791eb39defb8137ec3bf31cfb665d8d3beb61451f0c1d645e315b166828a58a3 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.583449 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-c597h"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.606864 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w"] Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.610399 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b4a58a_81d7_4129_8f45_5429e963676e.slice/crio-5cc223c3dbe87c0c0b49f820260318eabbe9d2c5885bfa1b38b08950af80fa5e WatchSource:0}: Error finding container 5cc223c3dbe87c0c0b49f820260318eabbe9d2c5885bfa1b38b08950af80fa5e: Status 404 returned error can't find the container with id 5cc223c3dbe87c0c0b49f820260318eabbe9d2c5885bfa1b38b08950af80fa5e Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.611706 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2936db6d_8a5b_4da8_9e52_e508a6e757fe.slice/crio-1fe78d73db97d66f5edadb46991fad71ebe88ff4c15640b936826b5262f74e49 WatchSource:0}: Error finding container 1fe78d73db97d66f5edadb46991fad71ebe88ff4c15640b936826b5262f74e49: Status 404 returned error can't find the container with id 1fe78d73db97d66f5edadb46991fad71ebe88ff4c15640b936826b5262f74e49 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.734904 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m"] Feb 
18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.741293 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod183b09db_ca5a_4aa1_b87b_908de4dc44ff.slice/crio-f9b9316101309ac1a5566770b337b359f68832e29afb6db778bc8e5893f76954 WatchSource:0}: Error finding container f9b9316101309ac1a5566770b337b359f68832e29afb6db778bc8e5893f76954: Status 404 returned error can't find the container with id f9b9316101309ac1a5566770b337b359f68832e29afb6db778bc8e5893f76954 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.742652 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.766827 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.813492 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.813671 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.819222 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.819183873 +0000 UTC m=+917.546887963 (durationBeforeRetry 2s). 
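
[note] The recurring "Couldn't get secret ... not found" / MountVolume.SetUp failures above mean the certificate Secrets these operator pods mount (metrics-server-cert, webhook-server-cert, infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert) did not yet exist in the openstack-operators namespace at this point of startup, so kubelet keeps the pods pending and retries the mounts. The Go sketch below is only a diagnostic aid, assuming a kubeconfig with read access to that namespace; the file name and kubeconfig path are illustrative, while the Secret names and namespace are taken verbatim from the log.

// check_certs.go - diagnostic sketch: report which of the certificate Secrets
// named in the kubelet errors above exist yet. Not part of any operator; the
// kubeconfig location (~/.kube/config) is an assumption for the sketch.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Secret names copied from the "Couldn't get secret" errors in the log.
	names := []string{
		"metrics-server-cert",
		"webhook-server-cert",
		"infra-operator-webhook-server-cert",
		"openstack-baremetal-operator-webhook-server-cert",
	}
	for _, name := range names {
		_, err := client.CoreV1().Secrets("openstack-operators").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("missing: %s (%v)\n", name, err)
			continue
		}
		fmt.Printf("present: %s\n", name)
	}
}
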
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.859759 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7"] Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.879329 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t"] Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.883608 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7487625_0c9e_4396_8eb8_5840ce4344c8.slice/crio-22611ac8f1f288ae6fc73e5b9d603d332e759835a75a6f142ebd627e2e63073d WatchSource:0}: Error finding container 22611ac8f1f288ae6fc73e5b9d603d332e759835a75a6f142ebd627e2e63073d: Status 404 returned error can't find the container with id 22611ac8f1f288ae6fc73e5b9d603d332e759835a75a6f142ebd627e2e63073d Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.897121 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrbmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-gwbk7_openstack-operators(a7487625-0c9e-4396-8eb8-5840ce4344c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.899029 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" podUID="a7487625-0c9e-4396-8eb8-5840ce4344c8" Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.900157 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c487619_568f_44a0_9d23_037794ada114.slice/crio-b5d20a2cd29055e06106754975fbfd6dee0dbeeb700bde338d90c8c037f97863 WatchSource:0}: Error finding container b5d20a2cd29055e06106754975fbfd6dee0dbeeb700bde338d90c8c037f97863: Status 404 returned error can't find the container with id b5d20a2cd29055e06106754975fbfd6dee0dbeeb700bde338d90c8c037f97863 Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.905467 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-xdrrr"] Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.908900 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.158:5001/openstack-k8s-operators/watcher-operator:21dfa39c3cddfadb564fb9c5cae6c76789d51664,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nc7k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5689f5d7c4-95x8t_openstack-operators(4c487619-568f-44a0-9d23-037794ada114): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.909062 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66351682_3cdf_41cc_80d9_0bbb020144d2.slice/crio-6527504348a11d8b2de92ff5b0dd9f81a7f98185edf3b4e73fb6440ce36147f7 WatchSource:0}: Error finding container 6527504348a11d8b2de92ff5b0dd9f81a7f98185edf3b4e73fb6440ce36147f7: Status 404 returned error can't find the container with id 6527504348a11d8b2de92ff5b0dd9f81a7f98185edf3b4e73fb6440ce36147f7 Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.910034 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" podUID="4c487619-568f-44a0-9d23-037794ada114" Feb 18 11:51:53 crc kubenswrapper[4922]: W0218 11:51:53.910562 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod387afbf1_afa5_414c_a22a_83a6a8197ff7.slice/crio-32ddb711968631bbecf3506594ca3fb6079bd04b1c2852f65477b75e233773fe WatchSource:0}: Error finding container 32ddb711968631bbecf3506594ca3fb6079bd04b1c2852f65477b75e233773fe: Status 404 returned error can't find the container with id 32ddb711968631bbecf3506594ca3fb6079bd04b1c2852f65477b75e233773fe Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.916636 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr"] Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.918330 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zn6pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-btlqf_openstack-operators(387afbf1-afa5-414c-a22a-83a6a8197ff7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.919580 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" podUID="387afbf1-afa5-414c-a22a-83a6a8197ff7" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.918395 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hczzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-hddmr_openstack-operators(66351682-3cdf-41cc-80d9-0bbb020144d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 11:51:53 crc kubenswrapper[4922]: E0218 11:51:53.921474 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" podUID="66351682-3cdf-41cc-80d9-0bbb020144d2" Feb 18 11:51:53 crc kubenswrapper[4922]: I0218 11:51:53.929073 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf"] Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.007685 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv"] Feb 18 11:51:54 crc kubenswrapper[4922]: W0218 11:51:54.015903 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69ef021e_1b46_4aeb_8023_93f6fb366396.slice/crio-1ba3a63b189a222a4db8bd883f9aea17f842e39f62bb27b037ac0bada74fffba WatchSource:0}: Error finding container 1ba3a63b189a222a4db8bd883f9aea17f842e39f62bb27b037ac0bada74fffba: Status 404 returned error can't find the container with id 1ba3a63b189a222a4db8bd883f9aea17f842e39f62bb27b037ac0bada74fffba Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.020086 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvrwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-98zrv_openstack-operators(69ef021e-1b46-4aeb-8023-93f6fb366396): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.021226 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" podUID="69ef021e-1b46-4aeb-8023-93f6fb366396" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.224474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.224608 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.225050 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.225025769 +0000 UTC m=+917.952729849 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.262959 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" event={"ID":"324031ff-ceae-4065-9955-fd5745647cb0","Type":"ContainerStarted","Data":"791eb39defb8137ec3bf31cfb665d8d3beb61451f0c1d645e315b166828a58a3"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.264989 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" event={"ID":"66351682-3cdf-41cc-80d9-0bbb020144d2","Type":"ContainerStarted","Data":"6527504348a11d8b2de92ff5b0dd9f81a7f98185edf3b4e73fb6440ce36147f7"} Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.266181 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" podUID="66351682-3cdf-41cc-80d9-0bbb020144d2" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.274210 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" event={"ID":"0a8811b6-4023-427d-a893-628e0dd338e8","Type":"ContainerStarted","Data":"8558ba6625dea2210b7b67db1e11d44c62b1c4bce4a06664d839c01b9382c622"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.275213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" event={"ID":"4c487619-568f-44a0-9d23-037794ada114","Type":"ContainerStarted","Data":"b5d20a2cd29055e06106754975fbfd6dee0dbeeb700bde338d90c8c037f97863"} Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.277404 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/watcher-operator:21dfa39c3cddfadb564fb9c5cae6c76789d51664\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" podUID="4c487619-568f-44a0-9d23-037794ada114" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.277782 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" event={"ID":"42271b89-6aba-4e15-a2a1-856b656a1b6e","Type":"ContainerStarted","Data":"bca8f660631f35a5242fb876dfc13783218eb26554e5bcd938da8dc505261074"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.289904 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" event={"ID":"183b09db-ca5a-4aa1-b87b-908de4dc44ff","Type":"ContainerStarted","Data":"f9b9316101309ac1a5566770b337b359f68832e29afb6db778bc8e5893f76954"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.297949 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" 
event={"ID":"8eae5053-64f3-401a-a151-dbf22f30a845","Type":"ContainerStarted","Data":"fbc581451c7775e4ed002eaee46cb078dba2b7394443f839c50f8f9d53eca2b0"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.303432 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" event={"ID":"90b4a58a-81d7-4129-8f45-5429e963676e","Type":"ContainerStarted","Data":"5cc223c3dbe87c0c0b49f820260318eabbe9d2c5885bfa1b38b08950af80fa5e"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.307563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" event={"ID":"69ef021e-1b46-4aeb-8023-93f6fb366396","Type":"ContainerStarted","Data":"1ba3a63b189a222a4db8bd883f9aea17f842e39f62bb27b037ac0bada74fffba"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.310497 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" event={"ID":"387afbf1-afa5-414c-a22a-83a6a8197ff7","Type":"ContainerStarted","Data":"32ddb711968631bbecf3506594ca3fb6079bd04b1c2852f65477b75e233773fe"} Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.310987 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" podUID="69ef021e-1b46-4aeb-8023-93f6fb366396" Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.312437 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" podUID="387afbf1-afa5-414c-a22a-83a6a8197ff7" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.313708 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" event={"ID":"7753280d-fc59-4887-9d87-a2cfd83e7ba9","Type":"ContainerStarted","Data":"0b141124e1520dbab2b2ea5158452a8e8b854d2cff468477b9a91b87418c6be6"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.317927 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" event={"ID":"52123256-1372-49b6-80ed-c3112d14a8fa","Type":"ContainerStarted","Data":"8bbea18a5b40a76c11c1625c7047a94a50daccbec8f3489452a3f764e74051a8"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.321346 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" event={"ID":"2936db6d-8a5b-4da8-9e52-e508a6e757fe","Type":"ContainerStarted","Data":"1fe78d73db97d66f5edadb46991fad71ebe88ff4c15640b936826b5262f74e49"} Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.331504 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" event={"ID":"a7487625-0c9e-4396-8eb8-5840ce4344c8","Type":"ContainerStarted","Data":"22611ac8f1f288ae6fc73e5b9d603d332e759835a75a6f142ebd627e2e63073d"} Feb 18 
11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.333282 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" podUID="a7487625-0c9e-4396-8eb8-5840ce4344c8" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.531073 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:54 crc kubenswrapper[4922]: I0218 11:51:54.531149 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.531231 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.531296 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.531309 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.531285024 +0000 UTC m=+918.258989094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:51:54 crc kubenswrapper[4922]: E0218 11:51:54.531739 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.531684194 +0000 UTC m=+918.259388334 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.355552 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" podUID="a7487625-0c9e-4396-8eb8-5840ce4344c8" Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.357469 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" podUID="69ef021e-1b46-4aeb-8023-93f6fb366396" Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.359918 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" podUID="387afbf1-afa5-414c-a22a-83a6a8197ff7" Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.360036 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/openstack-k8s-operators/watcher-operator:21dfa39c3cddfadb564fb9c5cae6c76789d51664\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" podUID="4c487619-568f-44a0-9d23-037794ada114" Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.363754 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" podUID="66351682-3cdf-41cc-80d9-0bbb020144d2" Feb 18 11:51:55 crc kubenswrapper[4922]: I0218 11:51:55.851761 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.851916 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:55 crc kubenswrapper[4922]: E0218 11:51:55.851982 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. 
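
[note] The ErrImagePull "pull QPS exceeded" entries above (neutron, watcher, telemetry, placement and rabbitmq-cluster operators) are not registry failures: kubelet rate-limits image pulls, and with this many operator pods scheduled in the same instant the limiter rejects the overflow, after which the affected pods fall into ImagePullBackOff and are retried. The sketch below only illustrates the effect with golang.org/x/time/rate; the QPS/burst values mirror commonly cited kubelet defaults (registryPullQPS=5, registryBurst=10) and are an assumption, not this node's configuration.

// pullqps_sketch.go - minimal illustration of why ~20 near-simultaneous image
// pulls trip a kubelet-style pull rate limit. The limiter parameters are assumed
// defaults, not read from this cluster.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed: 5 pulls/sec, burst of 10

	rejected := 0
	for i := 0; i < 20; i++ { // 20 near-simultaneous pull requests
		if !limiter.Allow() {
			rejected++ // analogous to ErrImagePull: "pull QPS exceeded" in the log
		}
	}
	fmt.Printf("%d of 20 pulls rejected by the QPS limiter\n", rejected)
}
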
No retries permitted until 2026-02-18 11:51:59.851964394 +0000 UTC m=+921.579668474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: I0218 11:51:56.265563 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.266029 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.266075 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:52:00.266061928 +0000 UTC m=+921.993766008 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: I0218 11:51:56.569847 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:56 crc kubenswrapper[4922]: I0218 11:51:56.569944 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.570000 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.570092 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:00.570068806 +0000 UTC m=+922.297772936 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.570150 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:51:56 crc kubenswrapper[4922]: E0218 11:51:56.570515 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:00.570474046 +0000 UTC m=+922.298178126 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:51:59 crc kubenswrapper[4922]: I0218 11:51:59.919570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:51:59 crc kubenswrapper[4922]: E0218 11:51:59.919833 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:51:59 crc kubenswrapper[4922]: E0218 11:51:59.920066 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:07.920049761 +0000 UTC m=+929.647753831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: I0218 11:52:00.326285 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.326518 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.326747 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:52:08.326726088 +0000 UTC m=+930.054430168 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: I0218 11:52:00.629982 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:00 crc kubenswrapper[4922]: I0218 11:52:00.630073 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.630219 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.630284 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.630346 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:08.630318766 +0000 UTC m=+930.358022846 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:52:00 crc kubenswrapper[4922]: E0218 11:52:00.630411 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:08.630401138 +0000 UTC m=+930.358105218 (durationBeforeRetry 8s). 
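
[note] The durationBeforeRetry values in the nestedpendingoperations errors (500ms, then 1s, 2s, 4s, 8s) show kubelet's per-volume exponential backoff: each failed MountVolume.SetUp doubles the wait before the next attempt, so the mounts keep retrying until the missing cert Secrets appear. A minimal sketch of that doubling schedule using apimachinery's wait.Backoff follows; it is not kubelet's own code, and the Cap value is assumed for the sketch.

// backoff_sketch.go - reproduce the doubling retry delays visible in the
// nestedpendingoperations errors above (500ms, 1s, 2s, 4s, 8s, ...).
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{
		Duration: 500 * time.Millisecond, // first durationBeforeRetry in the log
		Factor:   2.0,                    // each failed mount doubles the delay
		Steps:    6,
		Cap:      2 * time.Minute, // assumed upper bound for the sketch
	}
	for i := 0; i < 6; i++ {
		fmt.Println(b.Step()) // prints 500ms, 1s, 2s, 4s, 8s, ...
	}
}
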
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:52:05 crc kubenswrapper[4922]: E0218 11:52:05.823013 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 11:52:05 crc kubenswrapper[4922]: E0218 11:52:05.823717 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5fnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-2ncv8_openstack-operators(01766bee-50bd-4dcb-9b3d-831486ddeaf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:05 crc kubenswrapper[4922]: E0218 11:52:05.825036 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" podUID="01766bee-50bd-4dcb-9b3d-831486ddeaf4" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.392200 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.392714 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzpq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-6z2cq_openstack-operators(61f73f1d-e472-411e-adc0-6755c47aa72b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.394310 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" podUID="61f73f1d-e472-411e-adc0-6755c47aa72b" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.431831 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" podUID="01766bee-50bd-4dcb-9b3d-831486ddeaf4" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.432065 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" podUID="61f73f1d-e472-411e-adc0-6755c47aa72b" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.950883 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.951081 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghd7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-54f6768c69-c597h_openstack-operators(2936db6d-8a5b-4da8-9e52-e508a6e757fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:06 crc kubenswrapper[4922]: E0218 11:52:06.952330 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" podUID="2936db6d-8a5b-4da8-9e52-e508a6e757fe" Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.441908 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" podUID="2936db6d-8a5b-4da8-9e52-e508a6e757fe" Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.530571 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.530885 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmqv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-z7pdl_openstack-operators(42271b89-6aba-4e15-a2a1-856b656a1b6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.532098 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" podUID="42271b89-6aba-4e15-a2a1-856b656a1b6e" Feb 18 11:52:07 crc kubenswrapper[4922]: I0218 11:52:07.943435 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.943603 4922 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 11:52:07 crc kubenswrapper[4922]: E0218 11:52:07.943650 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert podName:3c16d873-1097-4f56-913f-cc366ed34c23 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:23.943633375 +0000 UTC m=+945.671337455 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert") pod "infra-operator-controller-manager-79d975b745-krt25" (UID: "3c16d873-1097-4f56-913f-cc366ed34c23") : secret "infra-operator-webhook-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: I0218 11:52:08.348700 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.348908 4922 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.349000 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert podName:081d9ec7-e338-437a-b3bc-af9b788db66a nodeName:}" failed. No retries permitted until 2026-02-18 11:52:24.348975439 +0000 UTC m=+946.076679539 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" (UID: "081d9ec7-e338-437a-b3bc-af9b788db66a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.447521 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" podUID="42271b89-6aba-4e15-a2a1-856b656a1b6e" Feb 18 11:52:08 crc kubenswrapper[4922]: I0218 11:52:08.652783 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:08 crc kubenswrapper[4922]: I0218 11:52:08.652937 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.653139 4922 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.653226 4922 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.653270 4922 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:24.653244433 +0000 UTC m=+946.380948513 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "metrics-server-cert" not found Feb 18 11:52:08 crc kubenswrapper[4922]: E0218 11:52:08.653324 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs podName:d81b14bf-a056-4780-af1a-bf38babee5b3 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:24.653296964 +0000 UTC m=+946.381001244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs") pod "openstack-operator-controller-manager-8685d86d55-pbbl7" (UID: "d81b14bf-a056-4780-af1a-bf38babee5b3") : secret "webhook-server-cert" not found Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.117808 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.118292 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ngz5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-4fm4m_openstack-operators(8eae5053-64f3-401a-a151-dbf22f30a845): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.119909 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" podUID="8eae5053-64f3-401a-a151-dbf22f30a845" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.455213 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" podUID="8eae5053-64f3-401a-a151-dbf22f30a845" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.696902 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.697068 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l57qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-wrd8w_openstack-operators(90b4a58a-81d7-4129-8f45-5429e963676e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:09 crc kubenswrapper[4922]: E0218 11:52:09.698388 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" podUID="90b4a58a-81d7-4129-8f45-5429e963676e" Feb 18 11:52:09 crc kubenswrapper[4922]: I0218 11:52:09.974777 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 11:52:10 crc kubenswrapper[4922]: E0218 11:52:10.233033 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 11:52:10 crc kubenswrapper[4922]: E0218 11:52:10.233215 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfvwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-jtfzr_openstack-operators(324031ff-ceae-4065-9955-fd5745647cb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:52:10 crc kubenswrapper[4922]: E0218 11:52:10.234529 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" podUID="324031ff-ceae-4065-9955-fd5745647cb0" Feb 18 11:52:10 crc kubenswrapper[4922]: E0218 11:52:10.462601 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" podUID="324031ff-ceae-4065-9955-fd5745647cb0" Feb 18 11:52:10 crc kubenswrapper[4922]: E0218 11:52:10.462741 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" podUID="90b4a58a-81d7-4129-8f45-5429e963676e" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.509092 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" event={"ID":"ae81863a-2778-4505-9106-c850f873a75d","Type":"ContainerStarted","Data":"82ad5d281eb09ba4cafc040c8576b6d48cf0446118f10f3120bba11bc65711c8"} Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.509421 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.511402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" 
event={"ID":"66351682-3cdf-41cc-80d9-0bbb020144d2","Type":"ContainerStarted","Data":"51e441e8bb0f01482649fae743fac2790e5190f0aa5133eeee078827ccb30a64"} Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.511596 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.512815 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" event={"ID":"0a8811b6-4023-427d-a893-628e0dd338e8","Type":"ContainerStarted","Data":"3820e59525b5adc62b77ed733a0295e398e1ed515fb19969b5a88c399217a3ce"} Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.513812 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.520125 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" event={"ID":"0032092e-84ca-426d-8f15-5141f4a8da20","Type":"ContainerStarted","Data":"94ea8c9fdab71f7b4b88df97f12706a635037fa70f83abf40da86ab871608a4f"} Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.520806 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.529282 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" event={"ID":"52123256-1372-49b6-80ed-c3112d14a8fa","Type":"ContainerStarted","Data":"047496db1edfbd8b01295f5532ec0ff045c27cd97d716a46e88784309b6676cf"} Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.529714 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.541831 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" podStartSLOduration=4.577771667 podStartE2EDuration="23.541809546s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:52.866095803 +0000 UTC m=+914.593799883" lastFinishedPulling="2026-02-18 11:52:11.830133682 +0000 UTC m=+933.557837762" observedRunningTime="2026-02-18 11:52:14.532920532 +0000 UTC m=+936.260624622" watchObservedRunningTime="2026-02-18 11:52:14.541809546 +0000 UTC m=+936.269513636" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.570158 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" podStartSLOduration=4.625805312 podStartE2EDuration="22.570138021s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.885536896 +0000 UTC m=+915.613240986" lastFinishedPulling="2026-02-18 11:52:11.829869615 +0000 UTC m=+933.557573695" observedRunningTime="2026-02-18 11:52:14.560646781 +0000 UTC m=+936.288350861" watchObservedRunningTime="2026-02-18 11:52:14.570138021 +0000 UTC m=+936.297842101" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.584579 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" podStartSLOduration=4.378875351 podStartE2EDuration="23.584563285s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.164554681 +0000 UTC m=+914.892258761" lastFinishedPulling="2026-02-18 11:52:12.370242615 +0000 UTC m=+934.097946695" observedRunningTime="2026-02-18 11:52:14.582094452 +0000 UTC m=+936.309798542" watchObservedRunningTime="2026-02-18 11:52:14.584563285 +0000 UTC m=+936.312267355" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.613736 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" podStartSLOduration=5.739083378 podStartE2EDuration="23.613698039s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.363983671 +0000 UTC m=+915.091687751" lastFinishedPulling="2026-02-18 11:52:11.238598332 +0000 UTC m=+932.966302412" observedRunningTime="2026-02-18 11:52:14.610456438 +0000 UTC m=+936.338160518" watchObservedRunningTime="2026-02-18 11:52:14.613698039 +0000 UTC m=+936.341402119" Feb 18 11:52:14 crc kubenswrapper[4922]: I0218 11:52:14.635582 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" podStartSLOduration=2.486291949 podStartE2EDuration="22.635562291s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.917974984 +0000 UTC m=+915.645679064" lastFinishedPulling="2026-02-18 11:52:14.067245326 +0000 UTC m=+935.794949406" observedRunningTime="2026-02-18 11:52:14.628494503 +0000 UTC m=+936.356198583" watchObservedRunningTime="2026-02-18 11:52:14.635562291 +0000 UTC m=+936.363266371" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.553644 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" event={"ID":"69ef021e-1b46-4aeb-8023-93f6fb366396","Type":"ContainerStarted","Data":"7d70293cb5ba2e1388847139393d5104dcfe893ef9454d4c2fdc3b93b52db0b1"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.555868 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" event={"ID":"51cd14ee-9b8a-421f-80bb-d208b752079d","Type":"ContainerStarted","Data":"45a6fcea78733aa6e7b3b497f092daf8022f0dfd64f7f32bcee6bfe8fe12a7be"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.556023 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.557509 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" event={"ID":"387afbf1-afa5-414c-a22a-83a6a8197ff7","Type":"ContainerStarted","Data":"180fdbde122f8b351f0c2d8ece628061b11a164646fd707a997de9002f4deb40"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.557701 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.559417 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" 
event={"ID":"7753280d-fc59-4887-9d87-a2cfd83e7ba9","Type":"ContainerStarted","Data":"15296e2d46dd291afb2887d2b6f31dd860a198fa3e67d29211f44e3a7fd4d95b"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.559583 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.564931 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" event={"ID":"4c9af0bf-50d7-42ef-a8df-241b5ec63f5a","Type":"ContainerStarted","Data":"1cf1b27cd102d4af62bc817ebf9c9dc41c91fbf75803b518d18a080ee20b50cc"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.565138 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.567667 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" event={"ID":"a7487625-0c9e-4396-8eb8-5840ce4344c8","Type":"ContainerStarted","Data":"37c4e74c297481c523e5c0083a06a074082fbbd2d201360965ad39fe666f1fee"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.567948 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.569764 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" event={"ID":"4c487619-568f-44a0-9d23-037794ada114","Type":"ContainerStarted","Data":"2e53e09cc17728c9f81e08eeb6af31e0317a0fb97b3090a1e4af610afef4d55f"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.569987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.571778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" event={"ID":"183b09db-ca5a-4aa1-b87b-908de4dc44ff","Type":"ContainerStarted","Data":"d6ae254be8e60d3b9f6e2d4f4171811a94d90e720d17a673689ec004a0e4548b"} Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.580418 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-98zrv" podStartSLOduration=3.533474971 podStartE2EDuration="23.580384952s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:54.019912085 +0000 UTC m=+915.747616165" lastFinishedPulling="2026-02-18 11:52:14.066822066 +0000 UTC m=+935.794526146" observedRunningTime="2026-02-18 11:52:15.576677518 +0000 UTC m=+937.304381598" watchObservedRunningTime="2026-02-18 11:52:15.580384952 +0000 UTC m=+937.308089032" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.608933 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" podStartSLOduration=5.942251843 podStartE2EDuration="24.608913581s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.165175127 +0000 UTC m=+914.892879207" lastFinishedPulling="2026-02-18 11:52:11.831836865 +0000 UTC m=+933.559540945" 
observedRunningTime="2026-02-18 11:52:15.606780287 +0000 UTC m=+937.334484377" watchObservedRunningTime="2026-02-18 11:52:15.608913581 +0000 UTC m=+937.336617681" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.623972 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" podStartSLOduration=4.323505826 podStartE2EDuration="24.623955581s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.896934064 +0000 UTC m=+915.624638144" lastFinishedPulling="2026-02-18 11:52:14.197383819 +0000 UTC m=+935.925087899" observedRunningTime="2026-02-18 11:52:15.623572291 +0000 UTC m=+937.351276371" watchObservedRunningTime="2026-02-18 11:52:15.623955581 +0000 UTC m=+937.351659661" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.642461 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" podStartSLOduration=3.372358058 podStartE2EDuration="23.642442987s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.917970684 +0000 UTC m=+915.645674764" lastFinishedPulling="2026-02-18 11:52:14.188055613 +0000 UTC m=+935.915759693" observedRunningTime="2026-02-18 11:52:15.638454326 +0000 UTC m=+937.366158406" watchObservedRunningTime="2026-02-18 11:52:15.642442987 +0000 UTC m=+937.370147067" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.675702 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" podStartSLOduration=3.4060110359999998 podStartE2EDuration="23.675683495s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.908740741 +0000 UTC m=+915.636444821" lastFinishedPulling="2026-02-18 11:52:14.17841319 +0000 UTC m=+935.906117280" observedRunningTime="2026-02-18 11:52:15.674513616 +0000 UTC m=+937.402217716" watchObservedRunningTime="2026-02-18 11:52:15.675683495 +0000 UTC m=+937.403387575" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.678421 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" podStartSLOduration=6.605500822 podStartE2EDuration="24.678411024s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.165176827 +0000 UTC m=+914.892880907" lastFinishedPulling="2026-02-18 11:52:11.238087029 +0000 UTC m=+932.965791109" observedRunningTime="2026-02-18 11:52:15.654526322 +0000 UTC m=+937.382230412" watchObservedRunningTime="2026-02-18 11:52:15.678411024 +0000 UTC m=+937.406115104" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.697701 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" podStartSLOduration=6.81592842 podStartE2EDuration="24.69767176s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.357263422 +0000 UTC m=+915.084967502" lastFinishedPulling="2026-02-18 11:52:11.239006772 +0000 UTC m=+932.966710842" observedRunningTime="2026-02-18 11:52:15.694074479 +0000 UTC m=+937.421778559" watchObservedRunningTime="2026-02-18 11:52:15.69767176 +0000 UTC m=+937.425375840" Feb 18 11:52:15 crc kubenswrapper[4922]: I0218 11:52:15.722751 4922 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" podStartSLOduration=6.229774498 podStartE2EDuration="23.722704731s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.745169346 +0000 UTC m=+915.472873426" lastFinishedPulling="2026-02-18 11:52:11.238099579 +0000 UTC m=+932.965803659" observedRunningTime="2026-02-18 11:52:15.718018403 +0000 UTC m=+937.445722503" watchObservedRunningTime="2026-02-18 11:52:15.722704731 +0000 UTC m=+937.450408811" Feb 18 11:52:16 crc kubenswrapper[4922]: I0218 11:52:16.581394 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:52:17 crc kubenswrapper[4922]: I0218 11:52:17.588545 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" event={"ID":"01766bee-50bd-4dcb-9b3d-831486ddeaf4","Type":"ContainerStarted","Data":"127bc88f95726b17d67e1ed81455b0bf6fec58eade654362da10a6251df24455"} Feb 18 11:52:17 crc kubenswrapper[4922]: I0218 11:52:17.589079 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:52:17 crc kubenswrapper[4922]: I0218 11:52:17.613565 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" podStartSLOduration=2.251615018 podStartE2EDuration="26.613279366s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.037201439 +0000 UTC m=+914.764905519" lastFinishedPulling="2026-02-18 11:52:17.398865777 +0000 UTC m=+939.126569867" observedRunningTime="2026-02-18 11:52:17.604922035 +0000 UTC m=+939.332626115" watchObservedRunningTime="2026-02-18 11:52:17.613279366 +0000 UTC m=+939.340983446" Feb 18 11:52:18 crc kubenswrapper[4922]: I0218 11:52:18.598240 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" event={"ID":"61f73f1d-e472-411e-adc0-6755c47aa72b","Type":"ContainerStarted","Data":"b1c8a7f969057e190de16d7fd5bf9a3f548ada5fb9a4c7786e1771f7aa1d433f"} Feb 18 11:52:18 crc kubenswrapper[4922]: I0218 11:52:18.598482 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:52:18 crc kubenswrapper[4922]: I0218 11:52:18.618597 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" podStartSLOduration=2.2675542 podStartE2EDuration="27.618572762s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.000388171 +0000 UTC m=+914.728092261" lastFinishedPulling="2026-02-18 11:52:18.351406743 +0000 UTC m=+940.079110823" observedRunningTime="2026-02-18 11:52:18.613902384 +0000 UTC m=+940.341606464" watchObservedRunningTime="2026-02-18 11:52:18.618572762 +0000 UTC m=+940.346276852" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.092768 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-f8lbk" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.161803 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2ncv8" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.246833 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-bnvrn" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.260623 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qm24h" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.308564 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-82hvr" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.463634 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-r4v59" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.575878 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-tn47v" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.628218 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" event={"ID":"90b4a58a-81d7-4129-8f45-5429e963676e","Type":"ContainerStarted","Data":"85cd2e4c3af3c7ceda3755929bd0aea9fabbf156d02b991371e49b388dfcb094"} Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.628398 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.645613 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" podStartSLOduration=3.761794587 podStartE2EDuration="31.645598713s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.614479199 +0000 UTC m=+915.342183279" lastFinishedPulling="2026-02-18 11:52:21.498283305 +0000 UTC m=+943.225987405" observedRunningTime="2026-02-18 11:52:22.642811413 +0000 UTC m=+944.370515493" watchObservedRunningTime="2026-02-18 11:52:22.645598713 +0000 UTC m=+944.373302793" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.670898 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-gwbk7" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.725691 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hddmr" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.799854 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-2bk9r" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.875787 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-btlqf" Feb 18 11:52:22 crc kubenswrapper[4922]: I0218 11:52:22.948720 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-xdrrr" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.005178 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5689f5d7c4-95x8t" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.636420 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" event={"ID":"2936db6d-8a5b-4da8-9e52-e508a6e757fe","Type":"ContainerStarted","Data":"a9ad118fadf52a9e0910d8da6266e65f45d417d6f03b25cf3fed755ed810c95b"} Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.636935 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.637973 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" event={"ID":"42271b89-6aba-4e15-a2a1-856b656a1b6e","Type":"ContainerStarted","Data":"d33e5739ebe03a6b08a8dc4ff9a6c1e0363333bee406aaa0c17e49624cf90bcb"} Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.638131 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.639570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" event={"ID":"8eae5053-64f3-401a-a151-dbf22f30a845","Type":"ContainerStarted","Data":"d98a9cb5bd91905d3ad32584c3e5e725d67b75989ab4ec89b830bf3ff9718b92"} Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.639798 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.660677 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" podStartSLOduration=3.6670884 podStartE2EDuration="32.660658816s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.614462099 +0000 UTC m=+915.342166169" lastFinishedPulling="2026-02-18 11:52:22.608032505 +0000 UTC m=+944.335736585" observedRunningTime="2026-02-18 11:52:23.653344541 +0000 UTC m=+945.381048621" watchObservedRunningTime="2026-02-18 11:52:23.660658816 +0000 UTC m=+945.388362896" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.668247 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" podStartSLOduration=3.826105469 podStartE2EDuration="32.668224666s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.733298996 +0000 UTC m=+915.461003076" lastFinishedPulling="2026-02-18 11:52:22.575418193 +0000 UTC m=+944.303122273" observedRunningTime="2026-02-18 11:52:23.667247572 +0000 UTC m=+945.394951662" watchObservedRunningTime="2026-02-18 11:52:23.668224666 +0000 UTC m=+945.395928756" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.693140 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" podStartSLOduration=2.889316914 podStartE2EDuration="31.693122604s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.773463559 +0000 UTC m=+915.501167649" lastFinishedPulling="2026-02-18 11:52:22.577269259 
+0000 UTC m=+944.304973339" observedRunningTime="2026-02-18 11:52:23.686265431 +0000 UTC m=+945.413969501" watchObservedRunningTime="2026-02-18 11:52:23.693122604 +0000 UTC m=+945.420826694" Feb 18 11:52:23 crc kubenswrapper[4922]: I0218 11:52:23.996595 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.003140 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c16d873-1097-4f56-913f-cc366ed34c23-cert\") pod \"infra-operator-controller-manager-79d975b745-krt25\" (UID: \"3c16d873-1097-4f56-913f-cc366ed34c23\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.135438 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.402107 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.408832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/081d9ec7-e338-437a-b3bc-af9b788db66a-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7\" (UID: \"081d9ec7-e338-437a-b3bc-af9b788db66a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.532089 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.665373 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-krt25"] Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.707644 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.707837 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.713863 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-metrics-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.714337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d81b14bf-a056-4780-af1a-bf38babee5b3-webhook-certs\") pod \"openstack-operator-controller-manager-8685d86d55-pbbl7\" (UID: \"d81b14bf-a056-4780-af1a-bf38babee5b3\") " pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.761605 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7"] Feb 18 11:52:24 crc kubenswrapper[4922]: W0218 11:52:24.764277 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081d9ec7_e338_437a_b3bc_af9b788db66a.slice/crio-0513720a13f6309d20acc61d826343eb53fc8d747c3f4a08928e47331fe3a2e2 WatchSource:0}: Error finding container 0513720a13f6309d20acc61d826343eb53fc8d747c3f4a08928e47331fe3a2e2: Status 404 returned error can't find the container with id 0513720a13f6309d20acc61d826343eb53fc8d747c3f4a08928e47331fe3a2e2 Feb 18 11:52:24 crc kubenswrapper[4922]: I0218 11:52:24.875129 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.107784 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7"] Feb 18 11:52:25 crc kubenswrapper[4922]: W0218 11:52:25.116232 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd81b14bf_a056_4780_af1a_bf38babee5b3.slice/crio-a7fdb6783296e00b83986aaae921a03105789cc0a2d757b7d0ac926a70148f5e WatchSource:0}: Error finding container a7fdb6783296e00b83986aaae921a03105789cc0a2d757b7d0ac926a70148f5e: Status 404 returned error can't find the container with id a7fdb6783296e00b83986aaae921a03105789cc0a2d757b7d0ac926a70148f5e Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.661690 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" event={"ID":"081d9ec7-e338-437a-b3bc-af9b788db66a","Type":"ContainerStarted","Data":"0513720a13f6309d20acc61d826343eb53fc8d747c3f4a08928e47331fe3a2e2"} Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.664563 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" event={"ID":"3c16d873-1097-4f56-913f-cc366ed34c23","Type":"ContainerStarted","Data":"229de988738c71267208a4416ebaff65f4884c952d0e200b0493557c02a22b27"} Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.667222 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" event={"ID":"324031ff-ceae-4065-9955-fd5745647cb0","Type":"ContainerStarted","Data":"c245b240c2f559aa64973e5251f50f071a0fb9f06bd3e9e15d5557b81424e4aa"} Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.667438 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.669247 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" event={"ID":"d81b14bf-a056-4780-af1a-bf38babee5b3","Type":"ContainerStarted","Data":"d147eb3f6c88484ca7ad3e2c09d3f57880738a52e8fa5290ce12b88df3cab757"} Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.669275 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" event={"ID":"d81b14bf-a056-4780-af1a-bf38babee5b3","Type":"ContainerStarted","Data":"a7fdb6783296e00b83986aaae921a03105789cc0a2d757b7d0ac926a70148f5e"} Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.669930 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.698227 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" podStartSLOduration=2.906799023 podStartE2EDuration="34.698202837s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:51:53.580451761 +0000 UTC m=+915.308155841" lastFinishedPulling="2026-02-18 11:52:25.371855575 +0000 UTC m=+947.099559655" observedRunningTime="2026-02-18 
11:52:25.694085653 +0000 UTC m=+947.421789733" watchObservedRunningTime="2026-02-18 11:52:25.698202837 +0000 UTC m=+947.425906917" Feb 18 11:52:25 crc kubenswrapper[4922]: I0218 11:52:25.730844 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" podStartSLOduration=33.730817849 podStartE2EDuration="33.730817849s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:25.725187627 +0000 UTC m=+947.452891707" watchObservedRunningTime="2026-02-18 11:52:25.730817849 +0000 UTC m=+947.458521939" Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.691588 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" event={"ID":"081d9ec7-e338-437a-b3bc-af9b788db66a","Type":"ContainerStarted","Data":"0cc47605d08777efd6a5a6e67438d19cb2b880d65062ef7cd760b4327724ae0e"} Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.692058 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.693395 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" event={"ID":"3c16d873-1097-4f56-913f-cc366ed34c23","Type":"ContainerStarted","Data":"cd55a5bdfcfaf39d2c010237362cd86c422ac15a198c01ef3e476a658a677415"} Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.693498 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.721145 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" podStartSLOduration=33.172203579 podStartE2EDuration="35.72112692s" podCreationTimestamp="2026-02-18 11:51:52 +0000 UTC" firstStartedPulling="2026-02-18 11:52:24.765732037 +0000 UTC m=+946.493436107" lastFinishedPulling="2026-02-18 11:52:27.314655358 +0000 UTC m=+949.042359448" observedRunningTime="2026-02-18 11:52:27.712831071 +0000 UTC m=+949.440535151" watchObservedRunningTime="2026-02-18 11:52:27.72112692 +0000 UTC m=+949.448831000" Feb 18 11:52:27 crc kubenswrapper[4922]: I0218 11:52:27.737013 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" podStartSLOduration=34.121607013 podStartE2EDuration="36.736999491s" podCreationTimestamp="2026-02-18 11:51:51 +0000 UTC" firstStartedPulling="2026-02-18 11:52:24.68259616 +0000 UTC m=+946.410300240" lastFinishedPulling="2026-02-18 11:52:27.297988638 +0000 UTC m=+949.025692718" observedRunningTime="2026-02-18 11:52:27.733439201 +0000 UTC m=+949.461143281" watchObservedRunningTime="2026-02-18 11:52:27.736999491 +0000 UTC m=+949.464703571" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.144235 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-6z2cq" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.556202 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jtfzr" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.587151 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-c597h" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.600389 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-wrd8w" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.698467 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4fm4m" Feb 18 11:52:32 crc kubenswrapper[4922]: I0218 11:52:32.698769 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-z7pdl" Feb 18 11:52:34 crc kubenswrapper[4922]: I0218 11:52:34.141806 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-krt25" Feb 18 11:52:34 crc kubenswrapper[4922]: I0218 11:52:34.538638 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7" Feb 18 11:52:34 crc kubenswrapper[4922]: I0218 11:52:34.881233 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8685d86d55-pbbl7" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.182287 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.184019 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.185878 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rnrrw" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.186173 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.186972 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.187202 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.205457 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.235304 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.238904 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.241003 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.246545 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.252530 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9skmt\" (UniqueName: \"kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.252582 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.353458 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flx4\" (UniqueName: \"kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.353534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.353574 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9skmt\" (UniqueName: \"kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.353607 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.353637 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.354461 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 
11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.380493 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9skmt\" (UniqueName: \"kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt\") pod \"dnsmasq-dns-675f4bcbfc-d5kxc\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.454613 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4flx4\" (UniqueName: \"kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.454698 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.454735 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.455832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.455849 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.483419 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4flx4\" (UniqueName: \"kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4\") pod \"dnsmasq-dns-78dd6ddcc-r4dpg\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.506972 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.562664 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.807716 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.894647 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" event={"ID":"12309c77-cb46-401f-a335-08ca9d74e019","Type":"ContainerStarted","Data":"b35ffd4661bbe49f0defaf1ce8038237c6db0e474221315740b32c40e8be885c"} Feb 18 11:52:54 crc kubenswrapper[4922]: I0218 11:52:54.930189 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:52:54 crc kubenswrapper[4922]: W0218 11:52:54.933575 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277ea358_73ba_466c_bad6_c788f49749a2.slice/crio-be1e6e7fb4233f0222ce6fa46078e0d2c0447caff506e59746208cd3058261a5 WatchSource:0}: Error finding container be1e6e7fb4233f0222ce6fa46078e0d2c0447caff506e59746208cd3058261a5: Status 404 returned error can't find the container with id be1e6e7fb4233f0222ce6fa46078e0d2c0447caff506e59746208cd3058261a5 Feb 18 11:52:55 crc kubenswrapper[4922]: I0218 11:52:55.903871 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" event={"ID":"277ea358-73ba-466c-bad6-c788f49749a2","Type":"ContainerStarted","Data":"be1e6e7fb4233f0222ce6fa46078e0d2c0447caff506e59746208cd3058261a5"} Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.023481 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.058327 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.059942 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.066163 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.101656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4rb\" (UniqueName: \"kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.101775 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.102102 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.203904 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.203972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4rb\" (UniqueName: \"kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.203995 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.204977 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.205074 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.232483 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4rb\" (UniqueName: 
\"kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb\") pod \"dnsmasq-dns-666b6646f7-rkhdz\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.314639 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.349799 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.352018 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.363877 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.389684 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.422114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.422180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.422222 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kzn\" (UniqueName: \"kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.523841 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.523926 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.523987 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kzn\" (UniqueName: \"kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.524926 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.524939 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.543202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kzn\" (UniqueName: \"kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn\") pod \"dnsmasq-dns-57d769cc4f-gcsm9\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.684734 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:52:57 crc kubenswrapper[4922]: I0218 11:52:57.897145 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.185834 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.187291 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.203019 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.203377 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ctw5n" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.204215 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.204352 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.204569 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.204701 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.204829 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.217072 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335729 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9gvvw\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335874 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335889 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335953 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335975 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.335990 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.336016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.336032 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.336058 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440245 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440304 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gvvw\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440336 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440353 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440400 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440445 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440462 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440483 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: 
I0218 11:52:58.440501 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.440529 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.445168 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.445551 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.445603 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.445975 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.449388 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.451437 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.461040 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.463419 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" 
Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.463737 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.468600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gvvw\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.471925 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.492700 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.493676 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.493827 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.495895 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.497294 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.497755 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.498026 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.498215 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.501072 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.501342 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8fwmc" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.506612 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.524074 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.543791 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.543880 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.543938 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.543967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544028 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544054 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544093 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xcj\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544127 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.544228 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645458 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645786 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645829 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645880 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.645956 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc 
kubenswrapper[4922]: I0218 11:52:58.645982 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646033 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646049 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646084 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xcj\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646333 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.646995 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.647023 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.647131 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.647550 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.651009 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.651062 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.656268 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.662109 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xcj\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.664473 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.672874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:58 crc kubenswrapper[4922]: I0218 11:52:58.870300 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.695127 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.698752 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.703684 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.703774 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2zdpl" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.703828 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.704247 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.712080 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.716053 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.760910 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.760965 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.760987 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-default\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.761016 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.761035 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.761065 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.761085 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9knsj\" (UniqueName: \"kubernetes.io/projected/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kube-api-access-9knsj\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.761103 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kolla-config\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867769 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867836 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867862 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-default\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867896 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867920 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.867976 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.868004 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knsj\" (UniqueName: \"kubernetes.io/projected/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kube-api-access-9knsj\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.868027 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.868587 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.869211 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.876721 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.877215 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.889508 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.895312 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knsj\" (UniqueName: \"kubernetes.io/projected/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kube-api-access-9knsj\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.895825 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-kolla-config\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.896507 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-config-data-default\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:52:59 crc kubenswrapper[4922]: I0218 11:52:59.897133 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/302e3b56-c5a4-4e80-bb7e-a9e6a61a119e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e\") " pod="openstack/openstack-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.020561 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.922121 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.924031 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.926600 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.927647 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.927905 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bjv2t" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.929641 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.932558 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.982962 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqxd\" (UniqueName: \"kubernetes.io/projected/873b23d0-3c83-4ab7-8178-1c4832c544a0-kube-api-access-9tqxd\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983026 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983073 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983098 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983138 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983173 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983204 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:00 crc kubenswrapper[4922]: I0218 11:53:00.983222 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.084491 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.084557 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.084584 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.084689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.085337 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.085528 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.086160 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqxd\" (UniqueName: \"kubernetes.io/projected/873b23d0-3c83-4ab7-8178-1c4832c544a0-kube-api-access-9tqxd\") pod 
\"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.086527 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.086570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.086624 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.087159 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.087951 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/873b23d0-3c83-4ab7-8178-1c4832c544a0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.088630 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873b23d0-3c83-4ab7-8178-1c4832c544a0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.091971 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.099241 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/873b23d0-3c83-4ab7-8178-1c4832c544a0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.116301 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqxd\" (UniqueName: \"kubernetes.io/projected/873b23d0-3c83-4ab7-8178-1c4832c544a0-kube-api-access-9tqxd\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " 
pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.118755 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"873b23d0-3c83-4ab7-8178-1c4832c544a0\") " pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.259173 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.309145 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.312107 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.315858 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.316116 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.318122 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-f8bcd" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.327284 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.392515 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-config-data\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.392576 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kolla-config\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.392598 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.392615 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.392634 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm44r\" (UniqueName: \"kubernetes.io/projected/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kube-api-access-zm44r\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.495312 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-config-data\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.495435 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kolla-config\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.495469 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.495499 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.495525 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm44r\" (UniqueName: \"kubernetes.io/projected/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kube-api-access-zm44r\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.496344 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kolla-config\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.496442 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-config-data\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.499578 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.507968 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.516356 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm44r\" (UniqueName: \"kubernetes.io/projected/0ce20f52-4b9d-47a6-8da7-c64cd1d15623-kube-api-access-zm44r\") pod \"memcached-0\" (UID: \"0ce20f52-4b9d-47a6-8da7-c64cd1d15623\") " pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.647198 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 11:53:01 crc kubenswrapper[4922]: I0218 11:53:01.949766 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" event={"ID":"5728000b-35c8-4748-a8bd-722a9d4da288","Type":"ContainerStarted","Data":"05332c9ac22e2e275a6ff8155167585702fc418b67e75c5ce73b68418331a924"} Feb 18 11:53:03 crc kubenswrapper[4922]: I0218 11:53:03.914819 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:53:03 crc kubenswrapper[4922]: I0218 11:53:03.915998 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:53:03 crc kubenswrapper[4922]: I0218 11:53:03.920725 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r55b4" Feb 18 11:53:03 crc kubenswrapper[4922]: I0218 11:53:03.972948 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:53:04 crc kubenswrapper[4922]: I0218 11:53:04.053443 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhpfc\" (UniqueName: \"kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc\") pod \"kube-state-metrics-0\" (UID: \"2aa305a0-c015-43c2-851c-8eff778238be\") " pod="openstack/kube-state-metrics-0" Feb 18 11:53:04 crc kubenswrapper[4922]: I0218 11:53:04.155656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhpfc\" (UniqueName: \"kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc\") pod \"kube-state-metrics-0\" (UID: \"2aa305a0-c015-43c2-851c-8eff778238be\") " pod="openstack/kube-state-metrics-0" Feb 18 11:53:04 crc kubenswrapper[4922]: I0218 11:53:04.182104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhpfc\" (UniqueName: \"kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc\") pod \"kube-state-metrics-0\" (UID: \"2aa305a0-c015-43c2-851c-8eff778238be\") " pod="openstack/kube-state-metrics-0" Feb 18 11:53:04 crc kubenswrapper[4922]: I0218 11:53:04.255144 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.166426 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.169475 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.174385 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.174566 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.175341 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.175490 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.175641 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.175937 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xmthr" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.179589 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.182055 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.182768 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272078 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272158 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272200 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272225 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6c8\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272248 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272316 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272375 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272400 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272432 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.272456 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374704 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374738 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374757 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6c8\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374773 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374914 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374949 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.374975 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.375622 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.375671 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.376204 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.379737 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.379797 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d461f0c4a551673a0d7d7003637451f1312f1b9722a2159a051859daee296e97/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.380199 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.392706 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.393191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.393198 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.393671 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.396772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6c8\" (UniqueName: 
\"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.413093 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:05 crc kubenswrapper[4922]: I0218 11:53:05.528526 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.589903 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-996pg"] Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.590901 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.595708 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.595907 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nmmtj" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.596557 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.610293 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-stvc7"] Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.611900 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.618447 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-996pg"] Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.630375 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-stvc7"] Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkc92\" (UniqueName: \"kubernetes.io/projected/cf286fe0-1b17-475a-b71b-ac4897c2f59d-kube-api-access-bkc92\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698526 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698553 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-log-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698581 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-combined-ca-bundle\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698602 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf286fe0-1b17-475a-b71b-ac4897c2f59d-scripts\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698617 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698632 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngmd\" (UniqueName: \"kubernetes.io/projected/a2d0a226-07e2-402d-a868-2f8374670dac-kube-api-access-gngmd\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698648 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-etc-ovs\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc 
kubenswrapper[4922]: I0218 11:53:06.698675 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-run\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698697 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-log\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698713 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-lib\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698753 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-ovn-controller-tls-certs\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.698775 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d0a226-07e2-402d-a868-2f8374670dac-scripts\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800266 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkc92\" (UniqueName: \"kubernetes.io/projected/cf286fe0-1b17-475a-b71b-ac4897c2f59d-kube-api-access-bkc92\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800655 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-log-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800691 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-combined-ca-bundle\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800727 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf286fe0-1b17-475a-b71b-ac4897c2f59d-scripts\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800775 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngmd\" (UniqueName: \"kubernetes.io/projected/a2d0a226-07e2-402d-a868-2f8374670dac-kube-api-access-gngmd\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800794 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-etc-ovs\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800818 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-run\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-log\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800860 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-lib\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800902 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-ovn-controller-tls-certs\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.800927 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d0a226-07e2-402d-a868-2f8374670dac-scripts\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801689 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " 
pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801708 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-etc-ovs\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801734 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-run\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-run\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801755 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a2d0a226-07e2-402d-a868-2f8374670dac-var-log-ovn\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801800 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-lib\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.801823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cf286fe0-1b17-475a-b71b-ac4897c2f59d-var-log\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.803325 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d0a226-07e2-402d-a868-2f8374670dac-scripts\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.806246 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-combined-ca-bundle\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.817380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngmd\" (UniqueName: \"kubernetes.io/projected/a2d0a226-07e2-402d-a868-2f8374670dac-kube-api-access-gngmd\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.828478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkc92\" (UniqueName: \"kubernetes.io/projected/cf286fe0-1b17-475a-b71b-ac4897c2f59d-kube-api-access-bkc92\") pod 
\"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.830493 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf286fe0-1b17-475a-b71b-ac4897c2f59d-scripts\") pod \"ovn-controller-ovs-stvc7\" (UID: \"cf286fe0-1b17-475a-b71b-ac4897c2f59d\") " pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.857384 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d0a226-07e2-402d-a868-2f8374670dac-ovn-controller-tls-certs\") pod \"ovn-controller-996pg\" (UID: \"a2d0a226-07e2-402d-a868-2f8374670dac\") " pod="openstack/ovn-controller-996pg" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.939282 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:06 crc kubenswrapper[4922]: I0218 11:53:06.947920 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-996pg" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.156776 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.159443 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.162462 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.162851 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.162509 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bn68r" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.162719 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.163809 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.169703 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.208962 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.209465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.209683 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.209885 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8q7m\" (UniqueName: \"kubernetes.io/projected/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-kube-api-access-q8q7m\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.210014 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.210129 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.210398 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.210533 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-config\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312013 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312078 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312111 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312180 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8q7m\" (UniqueName: \"kubernetes.io/projected/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-kube-api-access-q8q7m\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312331 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312431 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.312764 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.313439 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.313583 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.313673 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.313743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-config\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.314531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-config\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.315621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.315734 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" 
(UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.317279 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.330271 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8q7m\" (UniqueName: \"kubernetes.io/projected/b6f3b4f2-3f65-4278-9cd0-753adfee2ecd-kube-api-access-q8q7m\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.374565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd\") " pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:07 crc kubenswrapper[4922]: I0218 11:53:07.477330 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:09 crc kubenswrapper[4922]: I0218 11:53:09.807459 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:53:09 crc kubenswrapper[4922]: I0218 11:53:09.807777 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.545649 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.547164 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.552408 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.552503 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ht5pb" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.552649 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.556212 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.576547 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580633 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-config\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580725 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580771 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580817 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580869 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580922 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.580956 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npclz\" (UniqueName: \"kubernetes.io/projected/186f064b-a9e8-4637-a5eb-1646f2e1a783-kube-api-access-npclz\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.682876 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.682935 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npclz\" (UniqueName: \"kubernetes.io/projected/186f064b-a9e8-4637-a5eb-1646f2e1a783-kube-api-access-npclz\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.682985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-config\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683088 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683127 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683154 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683186 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683425 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.683449 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.684321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-config\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.684600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/186f064b-a9e8-4637-a5eb-1646f2e1a783-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.688140 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.688283 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.692085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/186f064b-a9e8-4637-a5eb-1646f2e1a783-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.713554 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.740015 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npclz\" (UniqueName: \"kubernetes.io/projected/186f064b-a9e8-4637-a5eb-1646f2e1a783-kube-api-access-npclz\") pod \"ovsdbserver-sb-0\" (UID: \"186f064b-a9e8-4637-a5eb-1646f2e1a783\") " pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:10 crc kubenswrapper[4922]: I0218 11:53:10.867692 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:11 crc kubenswrapper[4922]: I0218 11:53:11.962984 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.497483 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.497656 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9skmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-d5kxc_openstack(277ea358-73ba-466c-bad6-c788f49749a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.498838 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" podUID="277ea358-73ba-466c-bad6-c788f49749a2" Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.500242 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.500396 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4flx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-r4dpg_openstack(12309c77-cb46-401f-a335-08ca9d74e019): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:53:12 crc kubenswrapper[4922]: E0218 11:53:12.501922 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" podUID="12309c77-cb46-401f-a335-08ca9d74e019" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.035119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e","Type":"ContainerStarted","Data":"6916ab85b14e41fd88a5f1a98f0b906bdbeec1785f299e2809e2c8b53ae5dd9f"} Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.039662 4922 generic.go:334] "Generic (PLEG): container finished" podID="5728000b-35c8-4748-a8bd-722a9d4da288" containerID="e926ccb5b4c9a13d1cee243527546b45e51365b24bcd44a54ffecfd0423a70d5" exitCode=0 Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.040562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" event={"ID":"5728000b-35c8-4748-a8bd-722a9d4da288","Type":"ContainerDied","Data":"e926ccb5b4c9a13d1cee243527546b45e51365b24bcd44a54ffecfd0423a70d5"} Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 
11:53:13.052137 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.245808 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.253838 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.266535 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:53:13 crc kubenswrapper[4922]: E0218 11:53:13.307016 4922 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 18 11:53:13 crc kubenswrapper[4922]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5728000b-35c8-4748-a8bd-722a9d4da288/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 11:53:13 crc kubenswrapper[4922]: > podSandboxID="05332c9ac22e2e275a6ff8155167585702fc418b67e75c5ce73b68418331a924" Feb 18 11:53:13 crc kubenswrapper[4922]: E0218 11:53:13.307601 4922 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 11:53:13 crc kubenswrapper[4922]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm4rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-rkhdz_openstack(5728000b-35c8-4748-a8bd-722a9d4da288): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5728000b-35c8-4748-a8bd-722a9d4da288/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 18 11:53:13 crc kubenswrapper[4922]: > logger="UnhandledError" Feb 18 11:53:13 crc kubenswrapper[4922]: E0218 11:53:13.309475 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5728000b-35c8-4748-a8bd-722a9d4da288/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.384351 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-stvc7"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.395561 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.402134 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.595652 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.609624 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.640942 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9skmt\" (UniqueName: \"kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt\") pod \"277ea358-73ba-466c-bad6-c788f49749a2\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.641381 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config\") pod \"277ea358-73ba-466c-bad6-c788f49749a2\" (UID: \"277ea358-73ba-466c-bad6-c788f49749a2\") " Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.641494 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc\") pod \"12309c77-cb46-401f-a335-08ca9d74e019\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.641717 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config\") pod \"12309c77-cb46-401f-a335-08ca9d74e019\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.641746 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4flx4\" (UniqueName: \"kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4\") pod \"12309c77-cb46-401f-a335-08ca9d74e019\" (UID: \"12309c77-cb46-401f-a335-08ca9d74e019\") " Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.643051 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config" (OuterVolumeSpecName: "config") pod "12309c77-cb46-401f-a335-08ca9d74e019" (UID: "12309c77-cb46-401f-a335-08ca9d74e019"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.643552 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config" (OuterVolumeSpecName: "config") pod "277ea358-73ba-466c-bad6-c788f49749a2" (UID: "277ea358-73ba-466c-bad6-c788f49749a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.645272 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12309c77-cb46-401f-a335-08ca9d74e019" (UID: "12309c77-cb46-401f-a335-08ca9d74e019"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.647317 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt" (OuterVolumeSpecName: "kube-api-access-9skmt") pod "277ea358-73ba-466c-bad6-c788f49749a2" (UID: "277ea358-73ba-466c-bad6-c788f49749a2"). InnerVolumeSpecName "kube-api-access-9skmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.650743 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4" (OuterVolumeSpecName: "kube-api-access-4flx4") pod "12309c77-cb46-401f-a335-08ca9d74e019" (UID: "12309c77-cb46-401f-a335-08ca9d74e019"). InnerVolumeSpecName "kube-api-access-4flx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.745173 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9skmt\" (UniqueName: \"kubernetes.io/projected/277ea358-73ba-466c-bad6-c788f49749a2-kube-api-access-9skmt\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.745208 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277ea358-73ba-466c-bad6-c788f49749a2-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.745217 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.745225 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12309c77-cb46-401f-a335-08ca9d74e019-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.745234 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4flx4\" (UniqueName: \"kubernetes.io/projected/12309c77-cb46-401f-a335-08ca9d74e019-kube-api-access-4flx4\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.751398 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.764036 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-996pg"] Feb 18 11:53:13 crc kubenswrapper[4922]: W0218 11:53:13.780776 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d0a226_07e2_402d_a868_2f8374670dac.slice/crio-3055b44c45c190c60818e0cf5c1c85425414a78117655f4d765cb08f58235bc9 WatchSource:0}: Error finding container 3055b44c45c190c60818e0cf5c1c85425414a78117655f4d765cb08f58235bc9: Status 404 returned error can't find the container with id 3055b44c45c190c60818e0cf5c1c85425414a78117655f4d765cb08f58235bc9 Feb 18 11:53:13 crc kubenswrapper[4922]: I0218 11:53:13.835355 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.050580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerStarted","Data":"aab8e1d8bb4c1667bc6b73808bdb819ba395465155e9f67195316f9044955cf6"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.052971 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2aa305a0-c015-43c2-851c-8eff778238be","Type":"ContainerStarted","Data":"3a3d098eed640f36965fdadc7b1dd0c83929950b22d8057eb96a4ca71c50bd14"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.055210 4922 generic.go:334] 
"Generic (PLEG): container finished" podID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerID="cb1206c209e030ad065b0424209cc4f379d4283c70032fb8543af8119687b039" exitCode=0 Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.055260 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" event={"ID":"a9adaf2c-91b1-42f9-8d96-307a08030cce","Type":"ContainerDied","Data":"cb1206c209e030ad065b0424209cc4f379d4283c70032fb8543af8119687b039"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.055316 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" event={"ID":"a9adaf2c-91b1-42f9-8d96-307a08030cce","Type":"ContainerStarted","Data":"8aa2b445a2e846f21de5ba41b7250cd1d270eacaa01407089987d4676bfb4dcd"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.059487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" event={"ID":"12309c77-cb46-401f-a335-08ca9d74e019","Type":"ContainerDied","Data":"b35ffd4661bbe49f0defaf1ce8038237c6db0e474221315740b32c40e8be885c"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.059559 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r4dpg" Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.061924 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0ce20f52-4b9d-47a6-8da7-c64cd1d15623","Type":"ContainerStarted","Data":"1c52538cc464082ecb9a366072d250142efbfd791d4f3400f11f51c22f9cd1c6"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.063451 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"186f064b-a9e8-4637-a5eb-1646f2e1a783","Type":"ContainerStarted","Data":"79466567273a5850a41b267901e4466ff74878fe7f3d6561388315f310a215ca"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.064565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stvc7" event={"ID":"cf286fe0-1b17-475a-b71b-ac4897c2f59d","Type":"ContainerStarted","Data":"9f9b7fad52923615947a0d6e7151a78b4ebb8b6e007995f60cadafcce5688fe4"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.067203 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerStarted","Data":"bf8e622508a488fa6a5aab10a0db437c746f98562fadc68cb32fcd3c28724d08"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.069855 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerStarted","Data":"d9788f1e654fa9ba3ad3f0a6ae9798137af27ad57e7e68121b8391b0725d166c"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.071433 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" event={"ID":"277ea358-73ba-466c-bad6-c788f49749a2","Type":"ContainerDied","Data":"be1e6e7fb4233f0222ce6fa46078e0d2c0447caff506e59746208cd3058261a5"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.071527 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-d5kxc" Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.073016 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"873b23d0-3c83-4ab7-8178-1c4832c544a0","Type":"ContainerStarted","Data":"f0b5c3b14ba2985c7887a419fa639114ee9a1d29ac6136b93fd46a9476daf2bd"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.084172 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-996pg" event={"ID":"a2d0a226-07e2-402d-a868-2f8374670dac","Type":"ContainerStarted","Data":"3055b44c45c190c60818e0cf5c1c85425414a78117655f4d765cb08f58235bc9"} Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.179642 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.188272 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r4dpg"] Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.201205 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.209072 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-d5kxc"] Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.675792 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 11:53:14 crc kubenswrapper[4922]: W0218 11:53:14.697589 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6f3b4f2_3f65_4278_9cd0_753adfee2ecd.slice/crio-095284e789ef763dbe0314370172e682e42b58e817284911a877b0672eea210f WatchSource:0}: Error finding container 095284e789ef763dbe0314370172e682e42b58e817284911a877b0672eea210f: Status 404 returned error can't find the container with id 095284e789ef763dbe0314370172e682e42b58e817284911a877b0672eea210f Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.984223 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12309c77-cb46-401f-a335-08ca9d74e019" path="/var/lib/kubelet/pods/12309c77-cb46-401f-a335-08ca9d74e019/volumes" Feb 18 11:53:14 crc kubenswrapper[4922]: I0218 11:53:14.984590 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277ea358-73ba-466c-bad6-c788f49749a2" path="/var/lib/kubelet/pods/277ea358-73ba-466c-bad6-c788f49749a2/volumes" Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.093044 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" event={"ID":"5728000b-35c8-4748-a8bd-722a9d4da288","Type":"ContainerStarted","Data":"7d7de311b38252b803029fed4ce2a344d9898ebcba31e61063d58705f1576aca"} Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.093272 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.094780 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd","Type":"ContainerStarted","Data":"095284e789ef763dbe0314370172e682e42b58e817284911a877b0672eea210f"} Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.096841 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" 
event={"ID":"a9adaf2c-91b1-42f9-8d96-307a08030cce","Type":"ContainerStarted","Data":"bade2e297d9e59c84721045c2242b2738e6af09fe8594f0328f6a4f10a05370f"} Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.097007 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:53:15 crc kubenswrapper[4922]: I0218 11:53:15.112076 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" podStartSLOduration=6.585709778 podStartE2EDuration="18.112061417s" podCreationTimestamp="2026-02-18 11:52:57 +0000 UTC" firstStartedPulling="2026-02-18 11:53:01.180649 +0000 UTC m=+982.908353080" lastFinishedPulling="2026-02-18 11:53:12.707000639 +0000 UTC m=+994.434704719" observedRunningTime="2026-02-18 11:53:15.111538284 +0000 UTC m=+996.839242374" watchObservedRunningTime="2026-02-18 11:53:15.112061417 +0000 UTC m=+996.839765507" Feb 18 11:53:19 crc kubenswrapper[4922]: I0218 11:53:19.005476 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" podStartSLOduration=22.005460706 podStartE2EDuration="22.005460706s" podCreationTimestamp="2026-02-18 11:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:15.157752188 +0000 UTC m=+996.885456278" watchObservedRunningTime="2026-02-18 11:53:19.005460706 +0000 UTC m=+1000.733164776" Feb 18 11:53:19 crc kubenswrapper[4922]: I0218 11:53:19.904407 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wknpt"] Feb 18 11:53:19 crc kubenswrapper[4922]: I0218 11:53:19.907684 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:19 crc kubenswrapper[4922]: I0218 11:53:19.913574 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 11:53:19 crc kubenswrapper[4922]: I0218 11:53:19.942241 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wknpt"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.057853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovn-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.057965 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovs-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.057988 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.058008 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-combined-ca-bundle\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.058029 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7vs\" (UniqueName: \"kubernetes.io/projected/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-kube-api-access-nt7vs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.058072 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-config\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.067468 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.067682 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="dnsmasq-dns" containerID="cri-o://7d7de311b38252b803029fed4ce2a344d9898ebcba31e61063d58705f1576aca" gracePeriod=10 Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.069627 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.126712 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.128879 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.133899 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.156700 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169249 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovn-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169402 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovs-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169435 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169722 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-combined-ca-bundle\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169750 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7vs\" (UniqueName: \"kubernetes.io/projected/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-kube-api-access-nt7vs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.169896 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-config\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.171223 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovn-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.171277 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-ovs-rundir\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.171347 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-config\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.184421 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.194773 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7vs\" (UniqueName: \"kubernetes.io/projected/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-kube-api-access-nt7vs\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.228134 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c831c6ce-ca0c-4f7d-8268-b4efe13e687d-combined-ca-bundle\") pod \"ovn-controller-metrics-wknpt\" (UID: \"c831c6ce-ca0c-4f7d-8268-b4efe13e687d\") " pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.233208 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wknpt" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.237708 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.238051 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="dnsmasq-dns" containerID="cri-o://bade2e297d9e59c84721045c2242b2738e6af09fe8594f0328f6a4f10a05370f" gracePeriod=10 Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.249734 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.263504 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.265240 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.268648 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274333 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274407 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274461 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274504 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74b5\" (UniqueName: \"kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274530 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglxs\" (UniqueName: \"kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274589 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274641 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274667 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.274706 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.287070 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.375969 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376035 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376085 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376516 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d74b5\" (UniqueName: \"kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376588 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xglxs\" (UniqueName: \"kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376783 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376875 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376895 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.376925 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.377418 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.377770 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.377863 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.378202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.378199 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.378438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.378614 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.401531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d74b5\" (UniqueName: \"kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5\") pod \"dnsmasq-dns-5bf47b49b7-nhn5n\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.402103 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-xglxs\" (UniqueName: \"kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs\") pod \"dnsmasq-dns-8554648995-mt2n6\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.609867 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:20 crc kubenswrapper[4922]: I0218 11:53:20.667116 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:21 crc kubenswrapper[4922]: I0218 11:53:21.151708 4922 generic.go:334] "Generic (PLEG): container finished" podID="5728000b-35c8-4748-a8bd-722a9d4da288" containerID="7d7de311b38252b803029fed4ce2a344d9898ebcba31e61063d58705f1576aca" exitCode=0 Feb 18 11:53:21 crc kubenswrapper[4922]: I0218 11:53:21.151769 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" event={"ID":"5728000b-35c8-4748-a8bd-722a9d4da288","Type":"ContainerDied","Data":"7d7de311b38252b803029fed4ce2a344d9898ebcba31e61063d58705f1576aca"} Feb 18 11:53:21 crc kubenswrapper[4922]: I0218 11:53:21.154532 4922 generic.go:334] "Generic (PLEG): container finished" podID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerID="bade2e297d9e59c84721045c2242b2738e6af09fe8594f0328f6a4f10a05370f" exitCode=0 Feb 18 11:53:21 crc kubenswrapper[4922]: I0218 11:53:21.154584 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" event={"ID":"a9adaf2c-91b1-42f9-8d96-307a08030cce","Type":"ContainerDied","Data":"bade2e297d9e59c84721045c2242b2738e6af09fe8594f0328f6a4f10a05370f"} Feb 18 11:53:22 crc kubenswrapper[4922]: I0218 11:53:22.390096 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: connect: connection refused" Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.800917 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.946910 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc\") pod \"a9adaf2c-91b1-42f9-8d96-307a08030cce\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.947280 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config\") pod \"a9adaf2c-91b1-42f9-8d96-307a08030cce\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.947334 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kzn\" (UniqueName: \"kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn\") pod \"a9adaf2c-91b1-42f9-8d96-307a08030cce\" (UID: \"a9adaf2c-91b1-42f9-8d96-307a08030cce\") " Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.954716 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn" (OuterVolumeSpecName: "kube-api-access-n7kzn") pod "a9adaf2c-91b1-42f9-8d96-307a08030cce" (UID: "a9adaf2c-91b1-42f9-8d96-307a08030cce"). InnerVolumeSpecName "kube-api-access-n7kzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.982495 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config" (OuterVolumeSpecName: "config") pod "a9adaf2c-91b1-42f9-8d96-307a08030cce" (UID: "a9adaf2c-91b1-42f9-8d96-307a08030cce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:23 crc kubenswrapper[4922]: I0218 11:53:23.998016 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9adaf2c-91b1-42f9-8d96-307a08030cce" (UID: "a9adaf2c-91b1-42f9-8d96-307a08030cce"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.049154 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.049193 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9adaf2c-91b1-42f9-8d96-307a08030cce-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.049211 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kzn\" (UniqueName: \"kubernetes.io/projected/a9adaf2c-91b1-42f9-8d96-307a08030cce-kube-api-access-n7kzn\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.176710 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" event={"ID":"a9adaf2c-91b1-42f9-8d96-307a08030cce","Type":"ContainerDied","Data":"8aa2b445a2e846f21de5ba41b7250cd1d270eacaa01407089987d4676bfb4dcd"} Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.176737 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.176777 4922 scope.go:117] "RemoveContainer" containerID="bade2e297d9e59c84721045c2242b2738e6af09fe8594f0328f6a4f10a05370f" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.211972 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.218038 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gcsm9"] Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.267010 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.457272 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc\") pod \"5728000b-35c8-4748-a8bd-722a9d4da288\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.457380 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm4rb\" (UniqueName: \"kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb\") pod \"5728000b-35c8-4748-a8bd-722a9d4da288\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.457423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config\") pod \"5728000b-35c8-4748-a8bd-722a9d4da288\" (UID: \"5728000b-35c8-4748-a8bd-722a9d4da288\") " Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.463221 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb" (OuterVolumeSpecName: "kube-api-access-zm4rb") pod "5728000b-35c8-4748-a8bd-722a9d4da288" (UID: "5728000b-35c8-4748-a8bd-722a9d4da288"). InnerVolumeSpecName "kube-api-access-zm4rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.498197 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config" (OuterVolumeSpecName: "config") pod "5728000b-35c8-4748-a8bd-722a9d4da288" (UID: "5728000b-35c8-4748-a8bd-722a9d4da288"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.503545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5728000b-35c8-4748-a8bd-722a9d4da288" (UID: "5728000b-35c8-4748-a8bd-722a9d4da288"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.559057 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.559100 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5728000b-35c8-4748-a8bd-722a9d4da288-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.559111 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm4rb\" (UniqueName: \"kubernetes.io/projected/5728000b-35c8-4748-a8bd-722a9d4da288-kube-api-access-zm4rb\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:24 crc kubenswrapper[4922]: I0218 11:53:24.984038 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" path="/var/lib/kubelet/pods/a9adaf2c-91b1-42f9-8d96-307a08030cce/volumes" Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.185547 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" event={"ID":"5728000b-35c8-4748-a8bd-722a9d4da288","Type":"ContainerDied","Data":"05332c9ac22e2e275a6ff8155167585702fc418b67e75c5ce73b68418331a924"} Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.185651 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rkhdz" Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.240087 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.245849 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rkhdz"] Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.433301 4922 scope.go:117] "RemoveContainer" containerID="cb1206c209e030ad065b0424209cc4f379d4283c70032fb8543af8119687b039" Feb 18 11:53:25 crc kubenswrapper[4922]: I0218 11:53:25.824328 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wknpt"] Feb 18 11:53:26 crc kubenswrapper[4922]: I0218 11:53:26.151424 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:53:26 crc kubenswrapper[4922]: I0218 11:53:26.289109 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:26 crc kubenswrapper[4922]: W0218 11:53:26.507482 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66ee71b5_db58_4478_94c7_0067be9c018e.slice/crio-ff48b67de3bebc231cad1e6022f943d477fb1723890c418485b7af3b8fa2de11 WatchSource:0}: Error finding container ff48b67de3bebc231cad1e6022f943d477fb1723890c418485b7af3b8fa2de11: Status 404 returned error can't find the container with id ff48b67de3bebc231cad1e6022f943d477fb1723890c418485b7af3b8fa2de11 Feb 18 11:53:26 crc kubenswrapper[4922]: W0218 11:53:26.513046 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30419d6d_3999_43ef_8cd9_07143299061a.slice/crio-d25632e30548614105c4a5a234e63a61b760dc62618152fb02637520873fb6e0 WatchSource:0}: Error finding container d25632e30548614105c4a5a234e63a61b760dc62618152fb02637520873fb6e0: Status 404 returned error can't find the container with id d25632e30548614105c4a5a234e63a61b760dc62618152fb02637520873fb6e0 Feb 18 11:53:26 crc kubenswrapper[4922]: I0218 11:53:26.520186 4922 scope.go:117] "RemoveContainer" containerID="7d7de311b38252b803029fed4ce2a344d9898ebcba31e61063d58705f1576aca" Feb 18 11:53:26 crc kubenswrapper[4922]: I0218 11:53:26.987291 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" path="/var/lib/kubelet/pods/5728000b-35c8-4748-a8bd-722a9d4da288/volumes" Feb 18 11:53:27 crc kubenswrapper[4922]: I0218 11:53:27.202645 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wknpt" event={"ID":"c831c6ce-ca0c-4f7d-8268-b4efe13e687d","Type":"ContainerStarted","Data":"062f7f43eb7d96421bec1a95ff80452743a6e5e6148aeae796eecb137024ca1f"} Feb 18 11:53:27 crc kubenswrapper[4922]: I0218 11:53:27.203953 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mt2n6" event={"ID":"66ee71b5-db58-4478-94c7-0067be9c018e","Type":"ContainerStarted","Data":"ff48b67de3bebc231cad1e6022f943d477fb1723890c418485b7af3b8fa2de11"} Feb 18 11:53:27 crc kubenswrapper[4922]: I0218 11:53:27.205169 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" event={"ID":"30419d6d-3999-43ef-8cd9-07143299061a","Type":"ContainerStarted","Data":"d25632e30548614105c4a5a234e63a61b760dc62618152fb02637520873fb6e0"} Feb 18 11:53:27 crc 
kubenswrapper[4922]: I0218 11:53:27.685885 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-gcsm9" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: i/o timeout" Feb 18 11:53:38 crc kubenswrapper[4922]: I0218 11:53:38.618445 4922 scope.go:117] "RemoveContainer" containerID="e926ccb5b4c9a13d1cee243527546b45e51365b24bcd44a54ffecfd0423a70d5" Feb 18 11:53:39 crc kubenswrapper[4922]: E0218 11:53:39.157518 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 11:53:39 crc kubenswrapper[4922]: E0218 11:53:39.157831 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 11:53:39 crc kubenswrapper[4922]: E0218 11:53:39.157956 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhpfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(2aa305a0-c015-43c2-851c-8eff778238be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:53:39 crc kubenswrapper[4922]: E0218 11:53:39.159114 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="2aa305a0-c015-43c2-851c-8eff778238be" Feb 18 11:53:39 crc kubenswrapper[4922]: I0218 11:53:39.304327 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"873b23d0-3c83-4ab7-8178-1c4832c544a0","Type":"ContainerStarted","Data":"4aa86186b70bbaa1d88853213d84ce4bbe639bf86efad906f20d670b7d784b7c"} Feb 18 11:53:39 crc kubenswrapper[4922]: E0218 11:53:39.307801 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="2aa305a0-c015-43c2-851c-8eff778238be" Feb 18 11:53:39 crc kubenswrapper[4922]: I0218 11:53:39.807923 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:53:39 crc kubenswrapper[4922]: I0218 11:53:39.807986 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:53:40 crc kubenswrapper[4922]: I0218 11:53:40.314084 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerStarted","Data":"fcebb6698c6e4874e1a9ec8fbf1e0b1f0b32b5ba69a663e4d66216e0e480bd70"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.345832 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wknpt" event={"ID":"c831c6ce-ca0c-4f7d-8268-b4efe13e687d","Type":"ContainerStarted","Data":"4b2b9c1b4820b77aa29de7ecfd5734aa99c8ab5af9603f71c0064adb4ecf2c6c"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.349806 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"186f064b-a9e8-4637-a5eb-1646f2e1a783","Type":"ContainerStarted","Data":"b66989a7d2943a9028c5fc56e00cc93539607f92c8a52e2b9af3989791e7064f"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.351942 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e","Type":"ContainerStarted","Data":"048e6e6205c00557e2deae0c2b161998505a6264e0d0f3a1a72bad74824fd36b"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.353866 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0ce20f52-4b9d-47a6-8da7-c64cd1d15623","Type":"ContainerStarted","Data":"a97510ff59feaa5e50c719a472be949f9a003a8a6e7db8f428877d2b7e6f59fe"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.354387 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.355607 4922 generic.go:334] "Generic (PLEG): container finished" podID="66ee71b5-db58-4478-94c7-0067be9c018e" 
containerID="4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969" exitCode=0 Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.355658 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mt2n6" event={"ID":"66ee71b5-db58-4478-94c7-0067be9c018e","Type":"ContainerDied","Data":"4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.359517 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-996pg" event={"ID":"a2d0a226-07e2-402d-a868-2f8374670dac","Type":"ContainerStarted","Data":"9ca45559eb9e2e8e5645e715736a5505eee43b0dc10bcda0685ee036514107c4"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.359634 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-996pg" Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.361235 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stvc7" event={"ID":"cf286fe0-1b17-475a-b71b-ac4897c2f59d","Type":"ContainerStarted","Data":"1aa1946e5f937a792f7f4beb6cc426625f028314b133073cad0066076976cbfc"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.364925 4922 generic.go:334] "Generic (PLEG): container finished" podID="30419d6d-3999-43ef-8cd9-07143299061a" containerID="60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf" exitCode=0 Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.365136 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" event={"ID":"30419d6d-3999-43ef-8cd9-07143299061a","Type":"ContainerDied","Data":"60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.366481 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wknpt" podStartSLOduration=8.678920049 podStartE2EDuration="22.3664615s" podCreationTimestamp="2026-02-18 11:53:19 +0000 UTC" firstStartedPulling="2026-02-18 11:53:26.50318256 +0000 UTC m=+1008.230886660" lastFinishedPulling="2026-02-18 11:53:40.190724031 +0000 UTC m=+1021.918428111" observedRunningTime="2026-02-18 11:53:41.361627897 +0000 UTC m=+1023.089331977" watchObservedRunningTime="2026-02-18 11:53:41.3664615 +0000 UTC m=+1023.094165580" Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.376341 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerStarted","Data":"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.380036 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd","Type":"ContainerStarted","Data":"cce6c10d0b2be9b757f773def29c1072ae8928bec75e04b265c07a6436f35911"} Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.459485 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.731091133 podStartE2EDuration="40.459462373s" podCreationTimestamp="2026-02-18 11:53:01 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.426737454 +0000 UTC m=+995.154441534" lastFinishedPulling="2026-02-18 11:53:25.155108694 +0000 UTC m=+1006.882812774" observedRunningTime="2026-02-18 11:53:41.428759053 +0000 UTC m=+1023.156463163" watchObservedRunningTime="2026-02-18 
11:53:41.459462373 +0000 UTC m=+1023.187166453" Feb 18 11:53:41 crc kubenswrapper[4922]: I0218 11:53:41.461273 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-996pg" podStartSLOduration=23.812160344 podStartE2EDuration="35.461267069s" podCreationTimestamp="2026-02-18 11:53:06 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.784196865 +0000 UTC m=+995.511900945" lastFinishedPulling="2026-02-18 11:53:25.43330359 +0000 UTC m=+1007.161007670" observedRunningTime="2026-02-18 11:53:41.449987982 +0000 UTC m=+1023.177692062" watchObservedRunningTime="2026-02-18 11:53:41.461267069 +0000 UTC m=+1023.188971149" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.390448 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b6f3b4f2-3f65-4278-9cd0-753adfee2ecd","Type":"ContainerStarted","Data":"c199f7543b182f63fa0fbb340ec1e9ffc666d55129b7e5d914721d449dd1bcab"} Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.392678 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mt2n6" event={"ID":"66ee71b5-db58-4478-94c7-0067be9c018e","Type":"ContainerStarted","Data":"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94"} Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.392836 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.394894 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"186f064b-a9e8-4637-a5eb-1646f2e1a783","Type":"ContainerStarted","Data":"6e8120987d4ecbdfcb1b57bf6cf20f094da9b59df908aa1d02aed03319948576"} Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.396321 4922 generic.go:334] "Generic (PLEG): container finished" podID="cf286fe0-1b17-475a-b71b-ac4897c2f59d" containerID="1aa1946e5f937a792f7f4beb6cc426625f028314b133073cad0066076976cbfc" exitCode=0 Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.396380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stvc7" event={"ID":"cf286fe0-1b17-475a-b71b-ac4897c2f59d","Type":"ContainerDied","Data":"1aa1946e5f937a792f7f4beb6cc426625f028314b133073cad0066076976cbfc"} Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.399291 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" event={"ID":"30419d6d-3999-43ef-8cd9-07143299061a","Type":"ContainerStarted","Data":"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a"} Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.419901 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.612548705000002 podStartE2EDuration="36.419876212s" podCreationTimestamp="2026-02-18 11:53:06 +0000 UTC" firstStartedPulling="2026-02-18 11:53:14.700089741 +0000 UTC m=+996.427793821" lastFinishedPulling="2026-02-18 11:53:26.507417238 +0000 UTC m=+1008.235121328" observedRunningTime="2026-02-18 11:53:42.410809032 +0000 UTC m=+1024.138513132" watchObservedRunningTime="2026-02-18 11:53:42.419876212 +0000 UTC m=+1024.147580292" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.433130 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-mt2n6" podStartSLOduration=22.433110908 podStartE2EDuration="22.433110908s" podCreationTimestamp="2026-02-18 
11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:42.427087285 +0000 UTC m=+1024.154791365" watchObservedRunningTime="2026-02-18 11:53:42.433110908 +0000 UTC m=+1024.160814988" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.477446 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.496813 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.905350945 podStartE2EDuration="33.496792336s" podCreationTimestamp="2026-02-18 11:53:09 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.842505746 +0000 UTC m=+995.570209826" lastFinishedPulling="2026-02-18 11:53:25.433947137 +0000 UTC m=+1007.161651217" observedRunningTime="2026-02-18 11:53:42.468667862 +0000 UTC m=+1024.196371942" watchObservedRunningTime="2026-02-18 11:53:42.496792336 +0000 UTC m=+1024.224496416" Feb 18 11:53:42 crc kubenswrapper[4922]: I0218 11:53:42.499111 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" podStartSLOduration=22.499088165 podStartE2EDuration="22.499088165s" podCreationTimestamp="2026-02-18 11:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:42.495913274 +0000 UTC m=+1024.223617354" watchObservedRunningTime="2026-02-18 11:53:42.499088165 +0000 UTC m=+1024.226792245" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.409026 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerStarted","Data":"8a6f423971fc2df44ffe536b7fffc2792fbaae14dda7a484a39a6fbec81c8d3e"} Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.410261 4922 generic.go:334] "Generic (PLEG): container finished" podID="873b23d0-3c83-4ab7-8178-1c4832c544a0" containerID="4aa86186b70bbaa1d88853213d84ce4bbe639bf86efad906f20d670b7d784b7c" exitCode=0 Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.410312 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"873b23d0-3c83-4ab7-8178-1c4832c544a0","Type":"ContainerDied","Data":"4aa86186b70bbaa1d88853213d84ce4bbe639bf86efad906f20d670b7d784b7c"} Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.417108 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stvc7" event={"ID":"cf286fe0-1b17-475a-b71b-ac4897c2f59d","Type":"ContainerStarted","Data":"247a4a35d0562231316ad55424b35b3d988a6065476c802b024259ecd57da27b"} Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.417146 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-stvc7" event={"ID":"cf286fe0-1b17-475a-b71b-ac4897c2f59d","Type":"ContainerStarted","Data":"560f8f6a75194e2a684dcb5eebe01dc8e4f7d90457e9ab9e0f882f2638c4c36b"} Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.417162 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.418437 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.418482 4922 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.471748 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-stvc7" podStartSLOduration=25.717471905 podStartE2EDuration="37.471725853s" podCreationTimestamp="2026-02-18 11:53:06 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.402413496 +0000 UTC m=+995.130117576" lastFinishedPulling="2026-02-18 11:53:25.156667444 +0000 UTC m=+1006.884371524" observedRunningTime="2026-02-18 11:53:43.471692043 +0000 UTC m=+1025.199396143" watchObservedRunningTime="2026-02-18 11:53:43.471725853 +0000 UTC m=+1025.199429943" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.477824 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:43 crc kubenswrapper[4922]: I0218 11:53:43.868443 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:44 crc kubenswrapper[4922]: I0218 11:53:44.424749 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"873b23d0-3c83-4ab7-8178-1c4832c544a0","Type":"ContainerStarted","Data":"0307943daf75af30bf450dbdaa6f8cf8fd9ee96038755b9083999ea56738fb10"} Feb 18 11:53:45 crc kubenswrapper[4922]: I0218 11:53:45.432201 4922 generic.go:334] "Generic (PLEG): container finished" podID="302e3b56-c5a4-4e80-bb7e-a9e6a61a119e" containerID="048e6e6205c00557e2deae0c2b161998505a6264e0d0f3a1a72bad74824fd36b" exitCode=0 Feb 18 11:53:45 crc kubenswrapper[4922]: I0218 11:53:45.432261 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e","Type":"ContainerDied","Data":"048e6e6205c00557e2deae0c2b161998505a6264e0d0f3a1a72bad74824fd36b"} Feb 18 11:53:45 crc kubenswrapper[4922]: I0218 11:53:45.454143 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.001173341 podStartE2EDuration="46.454124175s" podCreationTimestamp="2026-02-18 11:52:59 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.275753018 +0000 UTC m=+995.003457098" lastFinishedPulling="2026-02-18 11:53:24.728703852 +0000 UTC m=+1006.456407932" observedRunningTime="2026-02-18 11:53:44.45171488 +0000 UTC m=+1026.179418980" watchObservedRunningTime="2026-02-18 11:53:45.454124175 +0000 UTC m=+1027.181828255" Feb 18 11:53:45 crc kubenswrapper[4922]: I0218 11:53:45.867955 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.443289 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"302e3b56-c5a4-4e80-bb7e-a9e6a61a119e","Type":"ContainerStarted","Data":"5531d27efb6d88e6ac3081f1f498415eb1081729a663f20139846ae7b13dcf3f"} Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.523505 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.542449 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=35.691148417 podStartE2EDuration="48.542429663s" podCreationTimestamp="2026-02-18 11:52:58 +0000 UTC" firstStartedPulling="2026-02-18 11:53:12.565645938 
+0000 UTC m=+994.293350018" lastFinishedPulling="2026-02-18 11:53:25.416927154 +0000 UTC m=+1007.144631264" observedRunningTime="2026-02-18 11:53:46.466875184 +0000 UTC m=+1028.194579264" watchObservedRunningTime="2026-02-18 11:53:46.542429663 +0000 UTC m=+1028.270133743" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.560151 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.652879 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.903437 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:46 crc kubenswrapper[4922]: I0218 11:53:46.947932 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116033 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 11:53:47 crc kubenswrapper[4922]: E0218 11:53:47.116395 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116415 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: E0218 11:53:47.116445 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="init" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116454 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="init" Feb 18 11:53:47 crc kubenswrapper[4922]: E0218 11:53:47.116466 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116472 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: E0218 11:53:47.116485 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="init" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116490 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="init" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116653 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5728000b-35c8-4748-a8bd-722a9d4da288" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.116666 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9adaf2c-91b1-42f9-8d96-307a08030cce" containerName="dnsmasq-dns" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.117574 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.120267 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-q9qpq" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.120293 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.120545 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.124356 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.139073 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161392 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161457 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09bbc755-2862-437b-9ef3-515103f77710-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161524 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-scripts\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161545 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpph\" (UniqueName: \"kubernetes.io/projected/09bbc755-2862-437b-9ef3-515103f77710-kube-api-access-xqpph\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.161590 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-config\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: 
I0218 11:53:47.263568 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqpph\" (UniqueName: \"kubernetes.io/projected/09bbc755-2862-437b-9ef3-515103f77710-kube-api-access-xqpph\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263641 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-config\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263682 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263736 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263777 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09bbc755-2862-437b-9ef3-515103f77710-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263791 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.263820 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-scripts\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.264426 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/09bbc755-2862-437b-9ef3-515103f77710-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.264725 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-scripts\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.264767 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09bbc755-2862-437b-9ef3-515103f77710-config\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.269010 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.269149 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.271064 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bbc755-2862-437b-9ef3-515103f77710-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.284717 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqpph\" (UniqueName: \"kubernetes.io/projected/09bbc755-2862-437b-9ef3-515103f77710-kube-api-access-xqpph\") pod \"ovn-northd-0\" (UID: \"09bbc755-2862-437b-9ef3-515103f77710\") " pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.435455 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 11:53:47 crc kubenswrapper[4922]: I0218 11:53:47.922741 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 11:53:48 crc kubenswrapper[4922]: I0218 11:53:48.476623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09bbc755-2862-437b-9ef3-515103f77710","Type":"ContainerStarted","Data":"d10d4588f87932e6c854e291f60ea902608e9bab1bae55bffccfe799d6dfe77d"} Feb 18 11:53:48 crc kubenswrapper[4922]: I0218 11:53:48.478586 4922 generic.go:334] "Generic (PLEG): container finished" podID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerID="8a6f423971fc2df44ffe536b7fffc2792fbaae14dda7a484a39a6fbec81c8d3e" exitCode=0 Feb 18 11:53:48 crc kubenswrapper[4922]: I0218 11:53:48.478623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerDied","Data":"8a6f423971fc2df44ffe536b7fffc2792fbaae14dda7a484a39a6fbec81c8d3e"} Feb 18 11:53:49 crc kubenswrapper[4922]: I0218 11:53:49.510261 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09bbc755-2862-437b-9ef3-515103f77710","Type":"ContainerStarted","Data":"9e408146c1e2977c9d8ebe6b86cf912809e85489b9a69912260744ae6bb8bac7"} Feb 18 11:53:49 crc kubenswrapper[4922]: I0218 11:53:49.510703 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"09bbc755-2862-437b-9ef3-515103f77710","Type":"ContainerStarted","Data":"0c3bd53d7f6433881def4720837acaf2e86ea1f115702f6575b37035ffd2e8fd"} Feb 18 11:53:49 crc kubenswrapper[4922]: I0218 11:53:49.510732 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 11:53:49 crc kubenswrapper[4922]: I0218 11:53:49.540541 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.483582376 podStartE2EDuration="2.540520857s" 
podCreationTimestamp="2026-02-18 11:53:47 +0000 UTC" firstStartedPulling="2026-02-18 11:53:47.926924365 +0000 UTC m=+1029.654628445" lastFinishedPulling="2026-02-18 11:53:48.983862856 +0000 UTC m=+1030.711566926" observedRunningTime="2026-02-18 11:53:49.532858833 +0000 UTC m=+1031.260562923" watchObservedRunningTime="2026-02-18 11:53:49.540520857 +0000 UTC m=+1031.268224937" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.021542 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.021778 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.209081 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.610092 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.611406 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.670327 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:53:50 crc kubenswrapper[4922]: I0218 11:53:50.730247 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.260292 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.260595 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.383479 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.524162 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="dnsmasq-dns" containerID="cri-o://e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a" gracePeriod=10 Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.612838 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 11:53:51 crc kubenswrapper[4922]: I0218 11:53:51.989019 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.064321 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb\") pod \"30419d6d-3999-43ef-8cd9-07143299061a\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.064440 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config\") pod \"30419d6d-3999-43ef-8cd9-07143299061a\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.064525 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d74b5\" (UniqueName: \"kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5\") pod \"30419d6d-3999-43ef-8cd9-07143299061a\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.064591 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc\") pod \"30419d6d-3999-43ef-8cd9-07143299061a\" (UID: \"30419d6d-3999-43ef-8cd9-07143299061a\") " Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.070321 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5" (OuterVolumeSpecName: "kube-api-access-d74b5") pod "30419d6d-3999-43ef-8cd9-07143299061a" (UID: "30419d6d-3999-43ef-8cd9-07143299061a"). InnerVolumeSpecName "kube-api-access-d74b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.107722 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30419d6d-3999-43ef-8cd9-07143299061a" (UID: "30419d6d-3999-43ef-8cd9-07143299061a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.110024 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30419d6d-3999-43ef-8cd9-07143299061a" (UID: "30419d6d-3999-43ef-8cd9-07143299061a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.118640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config" (OuterVolumeSpecName: "config") pod "30419d6d-3999-43ef-8cd9-07143299061a" (UID: "30419d6d-3999-43ef-8cd9-07143299061a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.167399 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.167437 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d74b5\" (UniqueName: \"kubernetes.io/projected/30419d6d-3999-43ef-8cd9-07143299061a-kube-api-access-d74b5\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.167452 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.167466 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30419d6d-3999-43ef-8cd9-07143299061a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.533134 4922 generic.go:334] "Generic (PLEG): container finished" podID="30419d6d-3999-43ef-8cd9-07143299061a" containerID="e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a" exitCode=0 Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.533210 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" event={"ID":"30419d6d-3999-43ef-8cd9-07143299061a","Type":"ContainerDied","Data":"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a"} Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.533233 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.533258 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nhn5n" event={"ID":"30419d6d-3999-43ef-8cd9-07143299061a","Type":"ContainerDied","Data":"d25632e30548614105c4a5a234e63a61b760dc62618152fb02637520873fb6e0"} Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.533281 4922 scope.go:117] "RemoveContainer" containerID="e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.565449 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.572042 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nhn5n"] Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.763472 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-dd51-account-create-update-fr8ml"] Feb 18 11:53:52 crc kubenswrapper[4922]: E0218 11:53:52.764008 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="dnsmasq-dns" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.764020 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="dnsmasq-dns" Feb 18 11:53:52 crc kubenswrapper[4922]: E0218 11:53:52.764057 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="init" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.764063 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="init" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.764197 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="30419d6d-3999-43ef-8cd9-07143299061a" containerName="dnsmasq-dns" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.768032 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.770650 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.775124 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dd51-account-create-update-fr8ml"] Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.808925 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-499z7"] Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.809841 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-499z7" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.822501 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-499z7"] Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.876844 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmlzq\" (UniqueName: \"kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.876940 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.979206 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.979261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmlzq\" (UniqueName: \"kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.979455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.979723 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ps9\" (UniqueName: \"kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.980435 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:52 crc kubenswrapper[4922]: I0218 11:53:52.985061 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30419d6d-3999-43ef-8cd9-07143299061a" path="/var/lib/kubelet/pods/30419d6d-3999-43ef-8cd9-07143299061a/volumes" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.013167 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-create-2zkj4"] Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.014950 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.022057 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2zkj4"] Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.022482 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmlzq\" (UniqueName: \"kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq\") pod \"keystone-dd51-account-create-update-fr8ml\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.081918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8ps9\" (UniqueName: \"kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.082024 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.082909 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.088748 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.109253 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8ps9\" (UniqueName: \"kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9\") pod \"keystone-db-create-499z7\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " pod="openstack/keystone-db-create-499z7" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.128165 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-499z7" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.131615 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a3b1-account-create-update-5qfd8"] Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.133029 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.135282 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.144179 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a3b1-account-create-update-5qfd8"] Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.187911 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.188019 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvv6\" (UniqueName: \"kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.289117 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.289187 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.289267 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvv6\" (UniqueName: \"kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.289309 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlnd\" (UniqueName: \"kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.290062 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.303825 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvv6\" (UniqueName: 
\"kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6\") pod \"placement-db-create-2zkj4\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.361635 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2zkj4" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.390667 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.390856 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqlnd\" (UniqueName: \"kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.391569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.407828 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqlnd\" (UniqueName: \"kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd\") pod \"placement-a3b1-account-create-update-5qfd8\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:53 crc kubenswrapper[4922]: I0218 11:53:53.475938 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.165961 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-xj7zt"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.167219 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.176726 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xj7zt"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.283921 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-95b8-account-create-update-58r6z"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.285046 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.289965 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.299871 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-95b8-account-create-update-58r6z"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.307576 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.307658 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fft6\" (UniqueName: \"kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.408579 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.408630 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t58\" (UniqueName: \"kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.408676 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.408909 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fft6\" (UniqueName: \"kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.409421 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.433165 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.449325 4922 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6fft6\" (UniqueName: \"kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6\") pod \"watcher-db-create-xj7zt\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.461865 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.462138 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.491078 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xj7zt" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.511391 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.511477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t58\" (UniqueName: \"kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.512594 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.542450 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t58\" (UniqueName: \"kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58\") pod \"watcher-95b8-account-create-update-58r6z\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.608113 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.612805 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.612865 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.612907 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.612937 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7r9n\" (UniqueName: \"kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.613038 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.714431 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.714494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.714530 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.714565 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: 
\"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.714591 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7r9n\" (UniqueName: \"kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.715470 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.715580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.715632 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.715773 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.744123 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7r9n\" (UniqueName: \"kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n\") pod \"dnsmasq-dns-b8fbc5445-zmcwq\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:54 crc kubenswrapper[4922]: I0218 11:53:54.812180 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.608183 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.621955 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.623039 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.625564 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.625745 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.625910 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.626043 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gh6mm" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731013 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-lock\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731097 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0771bdc1-7622-4a65-aa82-3150630ce652-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731140 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn972\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-kube-api-access-jn972\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731177 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.731279 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-cache\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832376 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832436 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-cache\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832488 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-lock\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832543 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0771bdc1-7622-4a65-aa82-3150630ce652-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832593 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn972\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-kube-api-access-jn972\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832636 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832840 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.832942 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-cache\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: E0218 11:53:55.832851 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:53:55 crc kubenswrapper[4922]: E0218 11:53:55.833006 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:53:55 crc kubenswrapper[4922]: E0218 11:53:55.833053 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 nodeName:}" failed. No retries permitted until 2026-02-18 11:53:56.333036173 +0000 UTC m=+1038.060740253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.833057 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0771bdc1-7622-4a65-aa82-3150630ce652-lock\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.848283 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0771bdc1-7622-4a65-aa82-3150630ce652-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.851255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn972\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-kube-api-access-jn972\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.868247 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.937080 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6gzbs"] Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.938541 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.941460 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.941624 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.941736 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 11:53:55 crc kubenswrapper[4922]: I0218 11:53:55.969844 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6gzbs"] Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s79m\" (UniqueName: \"kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036292 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036319 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036354 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036488 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.036513 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 
11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138315 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138479 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138500 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138549 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138583 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s79m\" (UniqueName: \"kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138610 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.138640 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.139262 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.139418 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.139448 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.142691 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.142926 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.150916 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.161936 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s79m\" (UniqueName: \"kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m\") pod \"swift-ring-rebalance-6gzbs\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.269488 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.342692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:56 crc kubenswrapper[4922]: E0218 11:53:56.342889 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:53:56 crc kubenswrapper[4922]: E0218 11:53:56.342916 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:53:56 crc kubenswrapper[4922]: E0218 11:53:56.342975 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 nodeName:}" failed. No retries permitted until 2026-02-18 11:53:57.342958258 +0000 UTC m=+1039.070662338 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.755836 4922 scope.go:117] "RemoveContainer" containerID="60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.795174 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mqx2n"] Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.796293 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.805931 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mqx2n"] Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.851119 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.851510 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zdx\" (UniqueName: \"kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.903164 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2714-account-create-update-j5l9f"] Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.906859 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.909897 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.917377 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2714-account-create-update-j5l9f"] Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.953064 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44zdx\" (UniqueName: \"kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.953130 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.953192 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl925\" (UniqueName: \"kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.953211 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.953914 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.980255 4922 scope.go:117] "RemoveContainer" containerID="e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.987094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44zdx\" (UniqueName: \"kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx\") pod \"glance-db-create-mqx2n\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:56 crc kubenswrapper[4922]: E0218 11:53:56.996279 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a\": container with ID starting with e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a not found: ID does not exist" containerID="e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.996328 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a"} err="failed to get container status \"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a\": rpc error: code = NotFound desc = could not find container \"e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a\": container with ID starting with e636c73dbb4ceb7b2f26422690f3ab2d91f3f8e5fae643efe99e90bb666ac64a not found: ID does not exist" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.996441 4922 scope.go:117] "RemoveContainer" containerID="60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf" Feb 18 11:53:56 crc kubenswrapper[4922]: E0218 11:53:56.997006 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf\": container with ID starting with 60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf not found: ID does not exist" containerID="60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf" Feb 18 11:53:56 crc kubenswrapper[4922]: I0218 11:53:56.997058 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf"} err="failed to get container status \"60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf\": rpc error: code = NotFound desc = could not find container \"60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf\": container with ID starting with 60eb9c5023e2d3d6c8d592fa62a787bc619c79be6125ae7f37f446b7b4620ecf not found: ID does not exist" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.055410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl925\" (UniqueName: \"kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.055448 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.056536 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.082912 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl925\" (UniqueName: \"kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925\") pod \"glance-2714-account-create-update-j5l9f\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.084330 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.117942 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mqx2n" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.372096 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:57 crc kubenswrapper[4922]: E0218 11:53:57.372559 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:53:57 crc kubenswrapper[4922]: E0218 11:53:57.372592 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:53:57 crc kubenswrapper[4922]: E0218 11:53:57.372648 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 nodeName:}" failed. No retries permitted until 2026-02-18 11:53:59.372628826 +0000 UTC m=+1041.100332906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.554804 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.591639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerStarted","Data":"e96b484d9840abfb2c7757906d574c4d66e64e51bbf1dbb2212180e14faff247"} Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.596249 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2aa305a0-c015-43c2-851c-8eff778238be","Type":"ContainerStarted","Data":"c04aa8b95a26312fcc998c31a3b946f55c4ecd9e671bf072302577040fa7d761"} Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.596614 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.597880 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" event={"ID":"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0","Type":"ContainerStarted","Data":"dd9d9badec862e12fd53c23629f63f90af2fbdcdb63f9f7d603ffed75ff6a6ad"} Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.622168 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.109344084 podStartE2EDuration="54.622150214s" podCreationTimestamp="2026-02-18 11:53:03 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.773223476 +0000 UTC m=+995.500927556" lastFinishedPulling="2026-02-18 11:53:57.286029606 +0000 UTC m=+1039.013733686" observedRunningTime="2026-02-18 11:53:57.616169802 +0000 UTC m=+1039.343873882" watchObservedRunningTime="2026-02-18 11:53:57.622150214 +0000 UTC m=+1039.349854294" Feb 18 11:53:57 crc kubenswrapper[4922]: 
W0218 11:53:57.680258 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83fbf909_70fe_4d3c_9b45_3f5a6733779c.slice/crio-7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9 WatchSource:0}: Error finding container 7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9: Status 404 returned error can't find the container with id 7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9 Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.688167 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xj7zt"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.694553 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6gzbs"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.702439 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-499z7"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.854201 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-95b8-account-create-update-58r6z"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.865755 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2714-account-create-update-j5l9f"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.874452 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-dd51-account-create-update-fr8ml"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.882784 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2zkj4"] Feb 18 11:53:57 crc kubenswrapper[4922]: I0218 11:53:57.906869 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a3b1-account-create-update-5qfd8"] Feb 18 11:53:57 crc kubenswrapper[4922]: W0218 11:53:57.965832 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85eec6a5_292b_4061_bb90_18904535d9cc.slice/crio-099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf WatchSource:0}: Error finding container 099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf: Status 404 returned error can't find the container with id 099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.039296 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mqx2n"] Feb 18 11:53:58 crc kubenswrapper[4922]: W0218 11:53:58.066914 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcac3a541_a2f7_4d95_97ff_1361fbd3e81e.slice/crio-35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47 WatchSource:0}: Error finding container 35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47: Status 404 returned error can't find the container with id 35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47 Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.607891 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-95b8-account-create-update-58r6z" event={"ID":"cc452273-8a5f-47d8-8aa5-1ddfe2240e28","Type":"ContainerStarted","Data":"54e9125c24a959588989fe6a7b334775970a6c4b231353eee75179d5fb3c2947"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.610832 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-a3b1-account-create-update-5qfd8" event={"ID":"85eec6a5-292b-4061-bb90-18904535d9cc","Type":"ContainerStarted","Data":"099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.612438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6gzbs" event={"ID":"83fbf909-70fe-4d3c-9b45-3f5a6733779c","Type":"ContainerStarted","Data":"7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.616172 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xj7zt" event={"ID":"3e854dba-d50f-4228-9b7a-c8a0ae16347a","Type":"ContainerStarted","Data":"d21236550250b2958c4054030789e8473894e0d1d7c3c12b25573ed942732eed"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.616235 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xj7zt" event={"ID":"3e854dba-d50f-4228-9b7a-c8a0ae16347a","Type":"ContainerStarted","Data":"c68fcfcd006d47497315707b33d56f34856ae845dcd2357d6a67318d6da6c7f6"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.621107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mqx2n" event={"ID":"cac3a541-a2f7-4d95-97ff-1361fbd3e81e","Type":"ContainerStarted","Data":"35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.629090 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerID="95d55b3314d502d1433cbc46f43ce2797b326e2911e0feadc7c77b360ebeb491" exitCode=0 Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.629271 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" event={"ID":"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0","Type":"ContainerDied","Data":"95d55b3314d502d1433cbc46f43ce2797b326e2911e0feadc7c77b360ebeb491"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.634637 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-xj7zt" podStartSLOduration=4.634612195 podStartE2EDuration="4.634612195s" podCreationTimestamp="2026-02-18 11:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:58.63324629 +0000 UTC m=+1040.360950370" watchObservedRunningTime="2026-02-18 11:53:58.634612195 +0000 UTC m=+1040.362316275" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.639409 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2zkj4" event={"ID":"05f03ea4-2462-4f2c-b9b8-395fc9802993","Type":"ContainerStarted","Data":"2a299295394b49d8734504ca223eb46c67dbed1ff8a13151afb9f72e374ec15e"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.647503 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dd51-account-create-update-fr8ml" event={"ID":"811ffd65-f5dc-44a3-a1cb-778937ca9771","Type":"ContainerStarted","Data":"80ee7e9b0cec62a50bee70889b18acf5fa94b2334f321fee43b9ca55b8bd52cc"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.674168 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2714-account-create-update-j5l9f" event={"ID":"3b15fbe3-8f30-41e8-8897-037694ccb56b","Type":"ContainerStarted","Data":"b02d51eb3381fc4834bb097a9321f3a4baf43667d320279cf5100f69846caf84"} Feb 
18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.678410 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-499z7" event={"ID":"0d3c9160-dd6d-4591-9554-d3c74df3a64e","Type":"ContainerStarted","Data":"30c5754ee779c0d183f858e5000b674deb35f4da6cc4bd94f8c25db1f475b7bc"} Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.722017 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hz64r"] Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.723582 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.726296 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.729117 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hz64r"] Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.802257 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.802462 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvzn\" (UniqueName: \"kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.904391 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvzn\" (UniqueName: \"kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.904474 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.905180 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:58 crc kubenswrapper[4922]: I0218 11:53:58.925448 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvzn\" (UniqueName: \"kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn\") pod \"root-account-create-update-hz64r\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.097674 4922 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hz64r" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.438409 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:53:59 crc kubenswrapper[4922]: E0218 11:53:59.438588 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:53:59 crc kubenswrapper[4922]: E0218 11:53:59.438928 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:53:59 crc kubenswrapper[4922]: E0218 11:53:59.438995 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 nodeName:}" failed. No retries permitted until 2026-02-18 11:54:03.438972679 +0000 UTC m=+1045.166676759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.688243 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2714-account-create-update-j5l9f" event={"ID":"3b15fbe3-8f30-41e8-8897-037694ccb56b","Type":"ContainerStarted","Data":"fedb74d1b6b4c2f0bd2aef2df515af61a8cc0caff9248cf99b996d4ac610fc62"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.691688 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" event={"ID":"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0","Type":"ContainerStarted","Data":"9c3a3f840bf392e642ec5f6d88074e14f3634fce3f5fe950f1ca6ef37f5d1bd5"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.691861 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.693668 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-95b8-account-create-update-58r6z" event={"ID":"cc452273-8a5f-47d8-8aa5-1ddfe2240e28","Type":"ContainerStarted","Data":"a445d9cdaa7af5fce54eb1734c1edd288e3e96655dfca8e174ddcae4e353e89d"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.695850 4922 generic.go:334] "Generic (PLEG): container finished" podID="3e854dba-d50f-4228-9b7a-c8a0ae16347a" containerID="d21236550250b2958c4054030789e8473894e0d1d7c3c12b25573ed942732eed" exitCode=0 Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.695953 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xj7zt" event={"ID":"3e854dba-d50f-4228-9b7a-c8a0ae16347a","Type":"ContainerDied","Data":"d21236550250b2958c4054030789e8473894e0d1d7c3c12b25573ed942732eed"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.698097 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dd51-account-create-update-fr8ml" event={"ID":"811ffd65-f5dc-44a3-a1cb-778937ca9771","Type":"ContainerStarted","Data":"e6310d509175d11956cf35f2881cac092138c8da93939b01d64f0006c350cdcc"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 
11:53:59.703425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mqx2n" event={"ID":"cac3a541-a2f7-4d95-97ff-1361fbd3e81e","Type":"ContainerStarted","Data":"e7a3b382cba61f7101e30bbe239f4c456f983c4335448f6fab20d5e184472620"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.705802 4922 generic.go:334] "Generic (PLEG): container finished" podID="0d3c9160-dd6d-4591-9554-d3c74df3a64e" containerID="38f62ebe43eed17090600fd985ab87c725adb4a3b86d21051e6be95923794e24" exitCode=0 Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.705857 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-499z7" event={"ID":"0d3c9160-dd6d-4591-9554-d3c74df3a64e","Type":"ContainerDied","Data":"38f62ebe43eed17090600fd985ab87c725adb4a3b86d21051e6be95923794e24"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.707493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2zkj4" event={"ID":"05f03ea4-2462-4f2c-b9b8-395fc9802993","Type":"ContainerStarted","Data":"c34444b452577c8ce28445ed13810cd08c00c27fb0fdc17083d4a2fd0f78af8b"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.707981 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2714-account-create-update-j5l9f" podStartSLOduration=3.707971603 podStartE2EDuration="3.707971603s" podCreationTimestamp="2026-02-18 11:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.700908834 +0000 UTC m=+1041.428612934" watchObservedRunningTime="2026-02-18 11:53:59.707971603 +0000 UTC m=+1041.435675673" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.710339 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a3b1-account-create-update-5qfd8" event={"ID":"85eec6a5-292b-4061-bb90-18904535d9cc","Type":"ContainerStarted","Data":"cca6d9a35f7a3b90b18a7c03d2317544897c0eada4e7ac4555360fd3747c2a53"} Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.721336 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-dd51-account-create-update-fr8ml" podStartSLOduration=7.721316602 podStartE2EDuration="7.721316602s" podCreationTimestamp="2026-02-18 11:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.712400986 +0000 UTC m=+1041.440105076" watchObservedRunningTime="2026-02-18 11:53:59.721316602 +0000 UTC m=+1041.449020692" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.737775 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" podStartSLOduration=5.73775932 podStartE2EDuration="5.73775932s" podCreationTimestamp="2026-02-18 11:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.733301477 +0000 UTC m=+1041.461005567" watchObservedRunningTime="2026-02-18 11:53:59.73775932 +0000 UTC m=+1041.465463400" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.773897 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-95b8-account-create-update-58r6z" podStartSLOduration=5.773878548 podStartE2EDuration="5.773878548s" podCreationTimestamp="2026-02-18 11:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.768387508 +0000 UTC m=+1041.496091588" watchObservedRunningTime="2026-02-18 11:53:59.773878548 +0000 UTC m=+1041.501582628" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.786493 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-2zkj4" podStartSLOduration=7.786475618 podStartE2EDuration="7.786475618s" podCreationTimestamp="2026-02-18 11:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.781280306 +0000 UTC m=+1041.508984386" watchObservedRunningTime="2026-02-18 11:53:59.786475618 +0000 UTC m=+1041.514179698" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.823315 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mqx2n" podStartSLOduration=3.8232972629999997 podStartE2EDuration="3.823297263s" podCreationTimestamp="2026-02-18 11:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.814637543 +0000 UTC m=+1041.542341623" watchObservedRunningTime="2026-02-18 11:53:59.823297263 +0000 UTC m=+1041.551001343" Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.837934 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hz64r"] Feb 18 11:53:59 crc kubenswrapper[4922]: I0218 11:53:59.840627 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-a3b1-account-create-update-5qfd8" podStartSLOduration=6.840606543 podStartE2EDuration="6.840606543s" podCreationTimestamp="2026-02-18 11:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:59.827923571 +0000 UTC m=+1041.555627671" watchObservedRunningTime="2026-02-18 11:53:59.840606543 +0000 UTC m=+1041.568310623" Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.719798 4922 generic.go:334] "Generic (PLEG): container finished" podID="cc452273-8a5f-47d8-8aa5-1ddfe2240e28" containerID="a445d9cdaa7af5fce54eb1734c1edd288e3e96655dfca8e174ddcae4e353e89d" exitCode=0 Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.719894 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-95b8-account-create-update-58r6z" event={"ID":"cc452273-8a5f-47d8-8aa5-1ddfe2240e28","Type":"ContainerDied","Data":"a445d9cdaa7af5fce54eb1734c1edd288e3e96655dfca8e174ddcae4e353e89d"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.722213 4922 generic.go:334] "Generic (PLEG): container finished" podID="85eec6a5-292b-4061-bb90-18904535d9cc" containerID="cca6d9a35f7a3b90b18a7c03d2317544897c0eada4e7ac4555360fd3747c2a53" exitCode=0 Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.722267 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a3b1-account-create-update-5qfd8" event={"ID":"85eec6a5-292b-4061-bb90-18904535d9cc","Type":"ContainerDied","Data":"cca6d9a35f7a3b90b18a7c03d2317544897c0eada4e7ac4555360fd3747c2a53"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.723829 4922 generic.go:334] "Generic (PLEG): container finished" podID="05f03ea4-2462-4f2c-b9b8-395fc9802993" containerID="c34444b452577c8ce28445ed13810cd08c00c27fb0fdc17083d4a2fd0f78af8b" exitCode=0 Feb 18 11:54:00 
crc kubenswrapper[4922]: I0218 11:54:00.723900 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2zkj4" event={"ID":"05f03ea4-2462-4f2c-b9b8-395fc9802993","Type":"ContainerDied","Data":"c34444b452577c8ce28445ed13810cd08c00c27fb0fdc17083d4a2fd0f78af8b"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.726389 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerStarted","Data":"3f427700514946e216712d6593eedc4f63a171beb78d1aacdd21101e9bbb6fd9"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.728083 4922 generic.go:334] "Generic (PLEG): container finished" podID="811ffd65-f5dc-44a3-a1cb-778937ca9771" containerID="e6310d509175d11956cf35f2881cac092138c8da93939b01d64f0006c350cdcc" exitCode=0 Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.728141 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dd51-account-create-update-fr8ml" event={"ID":"811ffd65-f5dc-44a3-a1cb-778937ca9771","Type":"ContainerDied","Data":"e6310d509175d11956cf35f2881cac092138c8da93939b01d64f0006c350cdcc"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.729671 4922 generic.go:334] "Generic (PLEG): container finished" podID="cac3a541-a2f7-4d95-97ff-1361fbd3e81e" containerID="e7a3b382cba61f7101e30bbe239f4c456f983c4335448f6fab20d5e184472620" exitCode=0 Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.729721 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mqx2n" event={"ID":"cac3a541-a2f7-4d95-97ff-1361fbd3e81e","Type":"ContainerDied","Data":"e7a3b382cba61f7101e30bbe239f4c456f983c4335448f6fab20d5e184472620"} Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.736050 4922 generic.go:334] "Generic (PLEG): container finished" podID="3b15fbe3-8f30-41e8-8897-037694ccb56b" containerID="fedb74d1b6b4c2f0bd2aef2df515af61a8cc0caff9248cf99b996d4ac610fc62" exitCode=0 Feb 18 11:54:00 crc kubenswrapper[4922]: I0218 11:54:00.736103 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2714-account-create-update-j5l9f" event={"ID":"3b15fbe3-8f30-41e8-8897-037694ccb56b","Type":"ContainerDied","Data":"fedb74d1b6b4c2f0bd2aef2df515af61a8cc0caff9248cf99b996d4ac610fc62"} Feb 18 11:54:02 crc kubenswrapper[4922]: W0218 11:54:02.396409 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82871f18_4432_42c4_bbfd_61ff507a1e95.slice/crio-3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd WatchSource:0}: Error finding container 3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd: Status 404 returned error can't find the container with id 3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.521464 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mqx2n" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.529942 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2zkj4" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.538078 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.575002 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xj7zt" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.585676 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.598510 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-499z7" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.607079 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jvv6\" (UniqueName: \"kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6\") pod \"05f03ea4-2462-4f2c-b9b8-395fc9802993\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.607302 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts\") pod \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.607516 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts\") pod \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608018 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.607964 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc452273-8a5f-47d8-8aa5-1ddfe2240e28" (UID: "cc452273-8a5f-47d8-8aa5-1ddfe2240e28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608002 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cac3a541-a2f7-4d95-97ff-1361fbd3e81e" (UID: "cac3a541-a2f7-4d95-97ff-1361fbd3e81e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608273 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44zdx\" (UniqueName: \"kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx\") pod \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\" (UID: \"cac3a541-a2f7-4d95-97ff-1361fbd3e81e\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608393 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts\") pod \"05f03ea4-2462-4f2c-b9b8-395fc9802993\" (UID: \"05f03ea4-2462-4f2c-b9b8-395fc9802993\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608505 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6t58\" (UniqueName: \"kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58\") pod \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\" (UID: \"cc452273-8a5f-47d8-8aa5-1ddfe2240e28\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.608992 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05f03ea4-2462-4f2c-b9b8-395fc9802993" (UID: "05f03ea4-2462-4f2c-b9b8-395fc9802993"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.610875 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.611007 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.611070 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05f03ea4-2462-4f2c-b9b8-395fc9802993-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.616254 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx" (OuterVolumeSpecName: "kube-api-access-44zdx") pod "cac3a541-a2f7-4d95-97ff-1361fbd3e81e" (UID: "cac3a541-a2f7-4d95-97ff-1361fbd3e81e"). InnerVolumeSpecName "kube-api-access-44zdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.618013 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6" (OuterVolumeSpecName: "kube-api-access-6jvv6") pod "05f03ea4-2462-4f2c-b9b8-395fc9802993" (UID: "05f03ea4-2462-4f2c-b9b8-395fc9802993"). InnerVolumeSpecName "kube-api-access-6jvv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.620647 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.627924 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58" (OuterVolumeSpecName: "kube-api-access-d6t58") pod "cc452273-8a5f-47d8-8aa5-1ddfe2240e28" (UID: "cc452273-8a5f-47d8-8aa5-1ddfe2240e28"). InnerVolumeSpecName "kube-api-access-d6t58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712267 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqlnd\" (UniqueName: \"kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd\") pod \"85eec6a5-292b-4061-bb90-18904535d9cc\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712638 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8ps9\" (UniqueName: \"kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9\") pod \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712735 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts\") pod \"811ffd65-f5dc-44a3-a1cb-778937ca9771\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712814 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmlzq\" (UniqueName: \"kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq\") pod \"811ffd65-f5dc-44a3-a1cb-778937ca9771\" (UID: \"811ffd65-f5dc-44a3-a1cb-778937ca9771\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712890 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl925\" (UniqueName: \"kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925\") pod \"3b15fbe3-8f30-41e8-8897-037694ccb56b\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.712960 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts\") pod \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713043 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts\") pod \"3b15fbe3-8f30-41e8-8897-037694ccb56b\" (UID: \"3b15fbe3-8f30-41e8-8897-037694ccb56b\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713136 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts\") pod \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\" (UID: \"0d3c9160-dd6d-4591-9554-d3c74df3a64e\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713244 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6fft6\" (UniqueName: \"kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6\") pod \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\" (UID: \"3e854dba-d50f-4228-9b7a-c8a0ae16347a\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713398 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts\") pod \"85eec6a5-292b-4061-bb90-18904535d9cc\" (UID: \"85eec6a5-292b-4061-bb90-18904535d9cc\") " Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713679 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "811ffd65-f5dc-44a3-a1cb-778937ca9771" (UID: "811ffd65-f5dc-44a3-a1cb-778937ca9771"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713885 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t58\" (UniqueName: \"kubernetes.io/projected/cc452273-8a5f-47d8-8aa5-1ddfe2240e28-kube-api-access-d6t58\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.713954 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jvv6\" (UniqueName: \"kubernetes.io/projected/05f03ea4-2462-4f2c-b9b8-395fc9802993-kube-api-access-6jvv6\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.714016 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811ffd65-f5dc-44a3-a1cb-778937ca9771-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.714075 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44zdx\" (UniqueName: \"kubernetes.io/projected/cac3a541-a2f7-4d95-97ff-1361fbd3e81e-kube-api-access-44zdx\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.714091 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b15fbe3-8f30-41e8-8897-037694ccb56b" (UID: "3b15fbe3-8f30-41e8-8897-037694ccb56b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.714727 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e854dba-d50f-4228-9b7a-c8a0ae16347a" (UID: "3e854dba-d50f-4228-9b7a-c8a0ae16347a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.715420 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd" (OuterVolumeSpecName: "kube-api-access-mqlnd") pod "85eec6a5-292b-4061-bb90-18904535d9cc" (UID: "85eec6a5-292b-4061-bb90-18904535d9cc"). InnerVolumeSpecName "kube-api-access-mqlnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.715822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9" (OuterVolumeSpecName: "kube-api-access-m8ps9") pod "0d3c9160-dd6d-4591-9554-d3c74df3a64e" (UID: "0d3c9160-dd6d-4591-9554-d3c74df3a64e"). InnerVolumeSpecName "kube-api-access-m8ps9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.715902 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d3c9160-dd6d-4591-9554-d3c74df3a64e" (UID: "0d3c9160-dd6d-4591-9554-d3c74df3a64e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.716160 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85eec6a5-292b-4061-bb90-18904535d9cc" (UID: "85eec6a5-292b-4061-bb90-18904535d9cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.716461 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925" (OuterVolumeSpecName: "kube-api-access-vl925") pod "3b15fbe3-8f30-41e8-8897-037694ccb56b" (UID: "3b15fbe3-8f30-41e8-8897-037694ccb56b"). InnerVolumeSpecName "kube-api-access-vl925". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.718991 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6" (OuterVolumeSpecName: "kube-api-access-6fft6") pod "3e854dba-d50f-4228-9b7a-c8a0ae16347a" (UID: "3e854dba-d50f-4228-9b7a-c8a0ae16347a"). InnerVolumeSpecName "kube-api-access-6fft6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.720814 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq" (OuterVolumeSpecName: "kube-api-access-vmlzq") pod "811ffd65-f5dc-44a3-a1cb-778937ca9771" (UID: "811ffd65-f5dc-44a3-a1cb-778937ca9771"). InnerVolumeSpecName "kube-api-access-vmlzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.763565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-95b8-account-create-update-58r6z" event={"ID":"cc452273-8a5f-47d8-8aa5-1ddfe2240e28","Type":"ContainerDied","Data":"54e9125c24a959588989fe6a7b334775970a6c4b231353eee75179d5fb3c2947"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.763609 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e9125c24a959588989fe6a7b334775970a6c4b231353eee75179d5fb3c2947" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.763702 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-95b8-account-create-update-58r6z" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.765106 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xj7zt" event={"ID":"3e854dba-d50f-4228-9b7a-c8a0ae16347a","Type":"ContainerDied","Data":"c68fcfcd006d47497315707b33d56f34856ae845dcd2357d6a67318d6da6c7f6"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.765163 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c68fcfcd006d47497315707b33d56f34856ae845dcd2357d6a67318d6da6c7f6" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.765265 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xj7zt" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.768247 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hz64r" event={"ID":"82871f18-4432-42c4-bbfd-61ff507a1e95","Type":"ContainerStarted","Data":"3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.770155 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-dd51-account-create-update-fr8ml" event={"ID":"811ffd65-f5dc-44a3-a1cb-778937ca9771","Type":"ContainerDied","Data":"80ee7e9b0cec62a50bee70889b18acf5fa94b2334f321fee43b9ca55b8bd52cc"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.770214 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ee7e9b0cec62a50bee70889b18acf5fa94b2334f321fee43b9ca55b8bd52cc" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.770534 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-dd51-account-create-update-fr8ml" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.774639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2714-account-create-update-j5l9f" event={"ID":"3b15fbe3-8f30-41e8-8897-037694ccb56b","Type":"ContainerDied","Data":"b02d51eb3381fc4834bb097a9321f3a4baf43667d320279cf5100f69846caf84"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.774677 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b02d51eb3381fc4834bb097a9321f3a4baf43667d320279cf5100f69846caf84" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.775013 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2714-account-create-update-j5l9f" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.780884 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a3b1-account-create-update-5qfd8" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.780900 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a3b1-account-create-update-5qfd8" event={"ID":"85eec6a5-292b-4061-bb90-18904535d9cc","Type":"ContainerDied","Data":"099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.781184 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099bd4d1700ca1fdea66438e1c36747fd163667b51fd5545201fba20a6a21ddf" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.782714 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-499z7" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.782754 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-499z7" event={"ID":"0d3c9160-dd6d-4591-9554-d3c74df3a64e","Type":"ContainerDied","Data":"30c5754ee779c0d183f858e5000b674deb35f4da6cc4bd94f8c25db1f475b7bc"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.782776 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c5754ee779c0d183f858e5000b674deb35f4da6cc4bd94f8c25db1f475b7bc" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.784906 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2zkj4" event={"ID":"05f03ea4-2462-4f2c-b9b8-395fc9802993","Type":"ContainerDied","Data":"2a299295394b49d8734504ca223eb46c67dbed1ff8a13151afb9f72e374ec15e"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.784931 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a299295394b49d8734504ca223eb46c67dbed1ff8a13151afb9f72e374ec15e" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.784977 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2zkj4" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.789586 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mqx2n" event={"ID":"cac3a541-a2f7-4d95-97ff-1361fbd3e81e","Type":"ContainerDied","Data":"35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47"} Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.789632 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35bfb15c61c9106211ef6e4dae348b65fb77a6409596090df5a648a4b7682f47" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.789700 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mqx2n" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.816628 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8ps9\" (UniqueName: \"kubernetes.io/projected/0d3c9160-dd6d-4591-9554-d3c74df3a64e-kube-api-access-m8ps9\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817499 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmlzq\" (UniqueName: \"kubernetes.io/projected/811ffd65-f5dc-44a3-a1cb-778937ca9771-kube-api-access-vmlzq\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817523 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl925\" (UniqueName: \"kubernetes.io/projected/3b15fbe3-8f30-41e8-8897-037694ccb56b-kube-api-access-vl925\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817535 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e854dba-d50f-4228-9b7a-c8a0ae16347a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817545 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b15fbe3-8f30-41e8-8897-037694ccb56b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817555 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d3c9160-dd6d-4591-9554-d3c74df3a64e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817565 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fft6\" (UniqueName: \"kubernetes.io/projected/3e854dba-d50f-4228-9b7a-c8a0ae16347a-kube-api-access-6fft6\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817574 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85eec6a5-292b-4061-bb90-18904535d9cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:02 crc kubenswrapper[4922]: I0218 11:54:02.817583 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqlnd\" (UniqueName: \"kubernetes.io/projected/85eec6a5-292b-4061-bb90-18904535d9cc-kube-api-access-mqlnd\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:03 crc kubenswrapper[4922]: I0218 11:54:03.527388 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:54:03 crc kubenswrapper[4922]: E0218 11:54:03.527615 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:54:03 crc kubenswrapper[4922]: E0218 11:54:03.527653 4922 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:54:03 crc kubenswrapper[4922]: E0218 11:54:03.527718 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 
nodeName:}" failed. No retries permitted until 2026-02-18 11:54:11.52769493 +0000 UTC m=+1053.255399010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.263884 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.807548 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hz64r" event={"ID":"82871f18-4432-42c4-bbfd-61ff507a1e95","Type":"ContainerStarted","Data":"f9617d9b57da4e95bb7eb7f0412ba485b6082bcd962a46f87e7d295e47d23bfb"} Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.814533 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.829736 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hz64r" podStartSLOduration=6.829713437 podStartE2EDuration="6.829713437s" podCreationTimestamp="2026-02-18 11:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:04.821040796 +0000 UTC m=+1046.548744876" watchObservedRunningTime="2026-02-18 11:54:04.829713437 +0000 UTC m=+1046.557417527" Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.912407 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:54:04 crc kubenswrapper[4922]: I0218 11:54:04.912671 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-mt2n6" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="dnsmasq-dns" containerID="cri-o://4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94" gracePeriod=10 Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.401232 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.461656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb\") pod \"66ee71b5-db58-4478-94c7-0067be9c018e\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.461793 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xglxs\" (UniqueName: \"kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs\") pod \"66ee71b5-db58-4478-94c7-0067be9c018e\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.461882 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config\") pod \"66ee71b5-db58-4478-94c7-0067be9c018e\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.461959 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb\") pod \"66ee71b5-db58-4478-94c7-0067be9c018e\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.462038 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc\") pod \"66ee71b5-db58-4478-94c7-0067be9c018e\" (UID: \"66ee71b5-db58-4478-94c7-0067be9c018e\") " Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.477938 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs" (OuterVolumeSpecName: "kube-api-access-xglxs") pod "66ee71b5-db58-4478-94c7-0067be9c018e" (UID: "66ee71b5-db58-4478-94c7-0067be9c018e"). InnerVolumeSpecName "kube-api-access-xglxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.507022 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66ee71b5-db58-4478-94c7-0067be9c018e" (UID: "66ee71b5-db58-4478-94c7-0067be9c018e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.514894 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config" (OuterVolumeSpecName: "config") pod "66ee71b5-db58-4478-94c7-0067be9c018e" (UID: "66ee71b5-db58-4478-94c7-0067be9c018e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.517135 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66ee71b5-db58-4478-94c7-0067be9c018e" (UID: "66ee71b5-db58-4478-94c7-0067be9c018e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.532169 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66ee71b5-db58-4478-94c7-0067be9c018e" (UID: "66ee71b5-db58-4478-94c7-0067be9c018e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.565750 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.565790 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xglxs\" (UniqueName: \"kubernetes.io/projected/66ee71b5-db58-4478-94c7-0067be9c018e-kube-api-access-xglxs\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.565804 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.565815 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.565824 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66ee71b5-db58-4478-94c7-0067be9c018e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.820631 4922 generic.go:334] "Generic (PLEG): container finished" podID="66ee71b5-db58-4478-94c7-0067be9c018e" containerID="4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94" exitCode=0 Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.820710 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mt2n6" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.820729 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mt2n6" event={"ID":"66ee71b5-db58-4478-94c7-0067be9c018e","Type":"ContainerDied","Data":"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94"} Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.820778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mt2n6" event={"ID":"66ee71b5-db58-4478-94c7-0067be9c018e","Type":"ContainerDied","Data":"ff48b67de3bebc231cad1e6022f943d477fb1723890c418485b7af3b8fa2de11"} Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.820799 4922 scope.go:117] "RemoveContainer" containerID="4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.824782 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6gzbs" event={"ID":"83fbf909-70fe-4d3c-9b45-3f5a6733779c","Type":"ContainerStarted","Data":"b2a0d9fa0c0ee6a87b933b519a3cc219be9eb78825f3cbe1dcc1c3100e2a1d94"} Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.827324 4922 generic.go:334] "Generic (PLEG): container finished" podID="82871f18-4432-42c4-bbfd-61ff507a1e95" containerID="f9617d9b57da4e95bb7eb7f0412ba485b6082bcd962a46f87e7d295e47d23bfb" exitCode=0 Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.827374 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hz64r" event={"ID":"82871f18-4432-42c4-bbfd-61ff507a1e95","Type":"ContainerDied","Data":"f9617d9b57da4e95bb7eb7f0412ba485b6082bcd962a46f87e7d295e47d23bfb"} Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.860223 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6gzbs" podStartSLOduration=3.733553199 podStartE2EDuration="10.860203156s" podCreationTimestamp="2026-02-18 11:53:55 +0000 UTC" firstStartedPulling="2026-02-18 11:53:57.692815249 +0000 UTC m=+1039.420519329" lastFinishedPulling="2026-02-18 11:54:04.819465206 +0000 UTC m=+1046.547169286" observedRunningTime="2026-02-18 11:54:05.857897207 +0000 UTC m=+1047.585601287" watchObservedRunningTime="2026-02-18 11:54:05.860203156 +0000 UTC m=+1047.587907236" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.915720 4922 scope.go:117] "RemoveContainer" containerID="4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.927260 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.933808 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mt2n6"] Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.953390 4922 scope.go:117] "RemoveContainer" containerID="4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94" Feb 18 11:54:05 crc kubenswrapper[4922]: E0218 11:54:05.954008 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94\": container with ID starting with 4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94 not found: ID does not exist" containerID="4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94" Feb 18 11:54:05 crc 
kubenswrapper[4922]: I0218 11:54:05.954058 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94"} err="failed to get container status \"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94\": rpc error: code = NotFound desc = could not find container \"4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94\": container with ID starting with 4e3aef5e68dbf272d244ded6025cd71542de6a272f367b19ff7f37bb7a72fc94 not found: ID does not exist" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.954098 4922 scope.go:117] "RemoveContainer" containerID="4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969" Feb 18 11:54:05 crc kubenswrapper[4922]: E0218 11:54:05.954531 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969\": container with ID starting with 4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969 not found: ID does not exist" containerID="4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969" Feb 18 11:54:05 crc kubenswrapper[4922]: I0218 11:54:05.954561 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969"} err="failed to get container status \"4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969\": rpc error: code = NotFound desc = could not find container \"4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969\": container with ID starting with 4802ba16c84d861fcc08bada9e91f7825e817d37cd0acc6e85ba3301fe424969 not found: ID does not exist" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:06.993893 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" path="/var/lib/kubelet/pods/66ee71b5-db58-4478-94c7-0067be9c018e/volumes" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.066744 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-st9pz"] Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067376 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3c9160-dd6d-4591-9554-d3c74df3a64e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067395 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3c9160-dd6d-4591-9554-d3c74df3a64e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067407 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e854dba-d50f-4228-9b7a-c8a0ae16347a" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067414 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e854dba-d50f-4228-9b7a-c8a0ae16347a" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067433 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85eec6a5-292b-4061-bb90-18904535d9cc" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067439 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="85eec6a5-292b-4061-bb90-18904535d9cc" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067448 4922 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc452273-8a5f-47d8-8aa5-1ddfe2240e28" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067454 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc452273-8a5f-47d8-8aa5-1ddfe2240e28" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067462 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac3a541-a2f7-4d95-97ff-1361fbd3e81e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067468 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac3a541-a2f7-4d95-97ff-1361fbd3e81e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067479 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f03ea4-2462-4f2c-b9b8-395fc9802993" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067484 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f03ea4-2462-4f2c-b9b8-395fc9802993" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067492 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b15fbe3-8f30-41e8-8897-037694ccb56b" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067499 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b15fbe3-8f30-41e8-8897-037694ccb56b" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067509 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811ffd65-f5dc-44a3-a1cb-778937ca9771" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067515 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="811ffd65-f5dc-44a3-a1cb-778937ca9771" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067527 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="dnsmasq-dns" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067533 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="dnsmasq-dns" Feb 18 11:54:07 crc kubenswrapper[4922]: E0218 11:54:07.067542 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="init" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067548 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="init" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067704 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="811ffd65-f5dc-44a3-a1cb-778937ca9771" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067715 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ee71b5-db58-4478-94c7-0067be9c018e" containerName="dnsmasq-dns" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067722 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f03ea4-2462-4f2c-b9b8-395fc9802993" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067731 4922 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cac3a541-a2f7-4d95-97ff-1361fbd3e81e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067742 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc452273-8a5f-47d8-8aa5-1ddfe2240e28" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067752 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3c9160-dd6d-4591-9554-d3c74df3a64e" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067759 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e854dba-d50f-4228-9b7a-c8a0ae16347a" containerName="mariadb-database-create" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067769 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="85eec6a5-292b-4061-bb90-18904535d9cc" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.067779 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b15fbe3-8f30-41e8-8897-037694ccb56b" containerName="mariadb-account-create-update" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.068352 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.071226 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jr8f4" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.072798 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.083508 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-st9pz"] Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.204900 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmckq\" (UniqueName: \"kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.204952 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.205030 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.205057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.306356 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zmckq\" (UniqueName: \"kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.306412 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.306482 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.306512 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.314997 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.316047 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.324661 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.325843 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmckq\" (UniqueName: \"kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq\") pod \"glance-db-sync-st9pz\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.406625 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-st9pz" Feb 18 11:54:07 crc kubenswrapper[4922]: I0218 11:54:07.516889 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 11:54:08 crc kubenswrapper[4922]: I0218 11:54:08.872020 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hz64r" event={"ID":"82871f18-4432-42c4-bbfd-61ff507a1e95","Type":"ContainerDied","Data":"3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd"} Feb 18 11:54:08 crc kubenswrapper[4922]: I0218 11:54:08.872270 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b32203e1127c8bf7749c5d652a55547d30ff0cbc6a4eb374405868fb79465cd" Feb 18 11:54:08 crc kubenswrapper[4922]: I0218 11:54:08.931969 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hz64r" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.034478 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts\") pod \"82871f18-4432-42c4-bbfd-61ff507a1e95\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.034562 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpvzn\" (UniqueName: \"kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn\") pod \"82871f18-4432-42c4-bbfd-61ff507a1e95\" (UID: \"82871f18-4432-42c4-bbfd-61ff507a1e95\") " Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.035442 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82871f18-4432-42c4-bbfd-61ff507a1e95" (UID: "82871f18-4432-42c4-bbfd-61ff507a1e95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.049376 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn" (OuterVolumeSpecName: "kube-api-access-bpvzn") pod "82871f18-4432-42c4-bbfd-61ff507a1e95" (UID: "82871f18-4432-42c4-bbfd-61ff507a1e95"). InnerVolumeSpecName "kube-api-access-bpvzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.137663 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82871f18-4432-42c4-bbfd-61ff507a1e95-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.137711 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpvzn\" (UniqueName: \"kubernetes.io/projected/82871f18-4432-42c4-bbfd-61ff507a1e95-kube-api-access-bpvzn\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.402695 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-st9pz"] Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.807799 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.808111 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.808158 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.808845 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.808914 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8" gracePeriod=600 Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.880570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-st9pz" event={"ID":"855fb3ec-e473-4a99-a94f-cc96dda6d9c4","Type":"ContainerStarted","Data":"bb03e387f02f6078ba9ca11f5028b069ffe62c115543a3d26dcd8e4428a02edd"} Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.883239 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hz64r" Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.883274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerStarted","Data":"184f9306d2796d4d920eb675a2e3eb752c81b217570c13dde972d69b9b2436a0"} Feb 18 11:54:09 crc kubenswrapper[4922]: I0218 11:54:09.917857 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=9.950005457 podStartE2EDuration="1m5.917836597s" podCreationTimestamp="2026-02-18 11:53:04 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.280385255 +0000 UTC m=+995.008089335" lastFinishedPulling="2026-02-18 11:54:09.248216395 +0000 UTC m=+1050.975920475" observedRunningTime="2026-02-18 11:54:09.911168007 +0000 UTC m=+1051.638872087" watchObservedRunningTime="2026-02-18 11:54:09.917836597 +0000 UTC m=+1051.645540677" Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.023796 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hz64r"] Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.031133 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hz64r"] Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.529097 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.904012 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8" exitCode=0 Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.904068 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8"} Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.904118 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc"} Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.904138 4922 scope.go:117] "RemoveContainer" containerID="0ea4c69ba94e3a69c2a9d6932ead886d1aa8f5af4ec72d79e294ae3c3d8f54dd" Feb 18 11:54:10 crc kubenswrapper[4922]: I0218 11:54:10.983758 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82871f18-4432-42c4-bbfd-61ff507a1e95" path="/var/lib/kubelet/pods/82871f18-4432-42c4-bbfd-61ff507a1e95/volumes" Feb 18 11:54:11 crc kubenswrapper[4922]: I0218 11:54:11.611237 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:54:11 crc kubenswrapper[4922]: E0218 11:54:11.611458 4922 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 11:54:11 crc kubenswrapper[4922]: E0218 11:54:11.611555 4922 projected.go:194] Error preparing data for projected volume etc-swift for 
pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 11:54:11 crc kubenswrapper[4922]: E0218 11:54:11.611631 4922 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift podName:0771bdc1-7622-4a65-aa82-3150630ce652 nodeName:}" failed. No retries permitted until 2026-02-18 11:54:27.611614506 +0000 UTC m=+1069.339318586 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift") pod "swift-storage-0" (UID: "0771bdc1-7622-4a65-aa82-3150630ce652") : configmap "swift-ring-files" not found Feb 18 11:54:11 crc kubenswrapper[4922]: I0218 11:54:11.987317 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-996pg" podUID="a2d0a226-07e2-402d-a868-2f8374670dac" containerName="ovn-controller" probeResult="failure" output=< Feb 18 11:54:11 crc kubenswrapper[4922]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 11:54:11 crc kubenswrapper[4922]: > Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.930715 4922 generic.go:334] "Generic (PLEG): container finished" podID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerID="fcebb6698c6e4874e1a9ec8fbf1e0b1f0b32b5ba69a663e4d66216e0e480bd70" exitCode=0 Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.930822 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerDied","Data":"fcebb6698c6e4874e1a9ec8fbf1e0b1f0b32b5ba69a663e4d66216e0e480bd70"} Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.932647 4922 generic.go:334] "Generic (PLEG): container finished" podID="83fbf909-70fe-4d3c-9b45-3f5a6733779c" containerID="b2a0d9fa0c0ee6a87b933b519a3cc219be9eb78825f3cbe1dcc1c3100e2a1d94" exitCode=0 Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.932704 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6gzbs" event={"ID":"83fbf909-70fe-4d3c-9b45-3f5a6733779c","Type":"ContainerDied","Data":"b2a0d9fa0c0ee6a87b933b519a3cc219be9eb78825f3cbe1dcc1c3100e2a1d94"} Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.942490 4922 generic.go:334] "Generic (PLEG): container finished" podID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerID="e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a" exitCode=0 Feb 18 11:54:12 crc kubenswrapper[4922]: I0218 11:54:12.942545 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerDied","Data":"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a"} Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.738034 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qvjkl"] Feb 18 11:54:13 crc kubenswrapper[4922]: E0218 11:54:13.738897 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82871f18-4432-42c4-bbfd-61ff507a1e95" containerName="mariadb-account-create-update" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.738917 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82871f18-4432-42c4-bbfd-61ff507a1e95" containerName="mariadb-account-create-update" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.739113 4922 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="82871f18-4432-42c4-bbfd-61ff507a1e95" containerName="mariadb-account-create-update" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.739751 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.742274 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.749267 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qvjkl"] Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.853481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.853674 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdxh\" (UniqueName: \"kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.953899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerStarted","Data":"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3"} Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.954181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.955376 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxdxh\" (UniqueName: \"kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.955478 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.956433 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.959716 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerStarted","Data":"745e13628b88d2d6249c23c49d872e4f0c6a229af5e85aec709bffe789a60191"} Feb 18 11:54:13 crc kubenswrapper[4922]: I0218 11:54:13.977938 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdxh\" (UniqueName: \"kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh\") pod \"root-account-create-update-qvjkl\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.009155 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=65.366206733 podStartE2EDuration="1m17.009128003s" podCreationTimestamp="2026-02-18 11:52:57 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.085861674 +0000 UTC m=+994.813565754" lastFinishedPulling="2026-02-18 11:53:24.728782904 +0000 UTC m=+1006.456487024" observedRunningTime="2026-02-18 11:54:13.998212616 +0000 UTC m=+1055.725916696" watchObservedRunningTime="2026-02-18 11:54:14.009128003 +0000 UTC m=+1055.736832083" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.067153 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.469287 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.502484 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=66.824249365 podStartE2EDuration="1m17.502449646s" podCreationTimestamp="2026-02-18 11:52:57 +0000 UTC" firstStartedPulling="2026-02-18 11:53:13.427054482 +0000 UTC m=+995.154758562" lastFinishedPulling="2026-02-18 11:53:24.105254763 +0000 UTC m=+1005.832958843" observedRunningTime="2026-02-18 11:54:14.047524599 +0000 UTC m=+1055.775228779" watchObservedRunningTime="2026-02-18 11:54:14.502449646 +0000 UTC m=+1056.230153726" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570458 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570611 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s79m\" (UniqueName: \"kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570670 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570843 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.570989 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.571066 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf\") pod \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\" (UID: \"83fbf909-70fe-4d3c-9b45-3f5a6733779c\") " Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.572093 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.572452 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.577760 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m" (OuterVolumeSpecName: "kube-api-access-6s79m") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "kube-api-access-6s79m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.581307 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.597557 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.601481 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts" (OuterVolumeSpecName: "scripts") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.606811 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83fbf909-70fe-4d3c-9b45-3f5a6733779c" (UID: "83fbf909-70fe-4d3c-9b45-3f5a6733779c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673788 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s79m\" (UniqueName: \"kubernetes.io/projected/83fbf909-70fe-4d3c-9b45-3f5a6733779c-kube-api-access-6s79m\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673860 4922 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673872 4922 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673882 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fbf909-70fe-4d3c-9b45-3f5a6733779c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673890 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673898 4922 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83fbf909-70fe-4d3c-9b45-3f5a6733779c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.673906 4922 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83fbf909-70fe-4d3c-9b45-3f5a6733779c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.718914 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qvjkl"] Feb 18 11:54:14 crc kubenswrapper[4922]: W0218 11:54:14.738283 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83953c4e_9a54_453a_880f_d5e4c01608f9.slice/crio-9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8 WatchSource:0}: Error finding container 9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8: Status 404 returned error can't find the container with id 9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8 Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.981890 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6gzbs" Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.998320 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qvjkl" event={"ID":"83953c4e-9a54-453a-880f-d5e4c01608f9","Type":"ContainerStarted","Data":"50e02399793b7c21fb8b885dfb76c0b8a822098669151d568f198c715c6c35d1"} Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.998388 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qvjkl" event={"ID":"83953c4e-9a54-453a-880f-d5e4c01608f9","Type":"ContainerStarted","Data":"9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8"} Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.998404 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6gzbs" event={"ID":"83fbf909-70fe-4d3c-9b45-3f5a6733779c","Type":"ContainerDied","Data":"7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9"} Feb 18 11:54:14 crc kubenswrapper[4922]: I0218 11:54:14.998420 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb68b07b7628d48bbfa07f469ab2d1994a576e88de08a8b50b0bca8c09412b9" Feb 18 11:54:15 crc kubenswrapper[4922]: I0218 11:54:15.017051 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qvjkl" podStartSLOduration=2.017031438 podStartE2EDuration="2.017031438s" podCreationTimestamp="2026-02-18 11:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:15.003446413 +0000 UTC m=+1056.731150503" watchObservedRunningTime="2026-02-18 11:54:15.017031438 +0000 UTC m=+1056.744735518" Feb 18 11:54:15 crc kubenswrapper[4922]: I0218 11:54:15.992424 4922 generic.go:334] "Generic (PLEG): container finished" podID="83953c4e-9a54-453a-880f-d5e4c01608f9" containerID="50e02399793b7c21fb8b885dfb76c0b8a822098669151d568f198c715c6c35d1" exitCode=0 Feb 18 11:54:15 crc kubenswrapper[4922]: I0218 11:54:15.992575 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qvjkl" event={"ID":"83953c4e-9a54-453a-880f-d5e4c01608f9","Type":"ContainerDied","Data":"50e02399793b7c21fb8b885dfb76c0b8a822098669151d568f198c715c6c35d1"} Feb 18 11:54:16 crc kubenswrapper[4922]: I0218 11:54:16.983483 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:54:16 crc kubenswrapper[4922]: I0218 11:54:16.992253 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-stvc7" Feb 18 11:54:16 crc kubenswrapper[4922]: I0218 11:54:16.994990 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-996pg" podUID="a2d0a226-07e2-402d-a868-2f8374670dac" containerName="ovn-controller" probeResult="failure" output=< Feb 18 11:54:16 crc kubenswrapper[4922]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 11:54:16 crc kubenswrapper[4922]: > Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.252083 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-996pg-config-mkd9k"] Feb 18 11:54:17 crc kubenswrapper[4922]: E0218 11:54:17.252810 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fbf909-70fe-4d3c-9b45-3f5a6733779c" containerName="swift-ring-rebalance" Feb 
18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.252836 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fbf909-70fe-4d3c-9b45-3f5a6733779c" containerName="swift-ring-rebalance" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.253085 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fbf909-70fe-4d3c-9b45-3f5a6733779c" containerName="swift-ring-rebalance" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.253794 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.256068 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.260649 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-996pg-config-mkd9k"] Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325382 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc75n\" (UniqueName: \"kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325564 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325772 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325824 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.325853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427413 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427510 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427546 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc75n\" (UniqueName: \"kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427631 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427662 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427764 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.427824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.428029 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.428592 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.429738 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.458992 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc75n\" (UniqueName: \"kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n\") pod \"ovn-controller-996pg-config-mkd9k\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:17 crc kubenswrapper[4922]: I0218 11:54:17.569908 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:18 crc kubenswrapper[4922]: I0218 11:54:18.525601 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 11:54:20 crc kubenswrapper[4922]: I0218 11:54:20.528976 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:20 crc kubenswrapper[4922]: I0218 11:54:20.533035 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:21 crc kubenswrapper[4922]: I0218 11:54:21.058061 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:21 crc kubenswrapper[4922]: I0218 11:54:21.993441 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-996pg" podUID="a2d0a226-07e2-402d-a868-2f8374670dac" containerName="ovn-controller" probeResult="failure" output=< Feb 18 11:54:21 crc kubenswrapper[4922]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 11:54:21 crc kubenswrapper[4922]: > Feb 18 11:54:23 crc kubenswrapper[4922]: I0218 11:54:23.347752 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:23 crc kubenswrapper[4922]: I0218 11:54:23.348063 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="prometheus" containerID="cri-o://e96b484d9840abfb2c7757906d574c4d66e64e51bbf1dbb2212180e14faff247" gracePeriod=600 Feb 18 11:54:23 crc kubenswrapper[4922]: I0218 11:54:23.348582 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="thanos-sidecar" containerID="cri-o://184f9306d2796d4d920eb675a2e3eb752c81b217570c13dde972d69b9b2436a0" gracePeriod=600 Feb 18 11:54:23 crc kubenswrapper[4922]: I0218 11:54:23.348643 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="config-reloader" 
containerID="cri-o://3f427700514946e216712d6593eedc4f63a171beb78d1aacdd21101e9bbb6fd9" gracePeriod=600 Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091768 4922 generic.go:334] "Generic (PLEG): container finished" podID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerID="184f9306d2796d4d920eb675a2e3eb752c81b217570c13dde972d69b9b2436a0" exitCode=0 Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091806 4922 generic.go:334] "Generic (PLEG): container finished" podID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerID="3f427700514946e216712d6593eedc4f63a171beb78d1aacdd21101e9bbb6fd9" exitCode=0 Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091814 4922 generic.go:334] "Generic (PLEG): container finished" podID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerID="e96b484d9840abfb2c7757906d574c4d66e64e51bbf1dbb2212180e14faff247" exitCode=0 Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091834 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerDied","Data":"184f9306d2796d4d920eb675a2e3eb752c81b217570c13dde972d69b9b2436a0"} Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091862 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerDied","Data":"3f427700514946e216712d6593eedc4f63a171beb78d1aacdd21101e9bbb6fd9"} Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.091874 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerDied","Data":"e96b484d9840abfb2c7757906d574c4d66e64e51bbf1dbb2212180e14faff247"} Feb 18 11:54:24 crc kubenswrapper[4922]: E0218 11:54:24.705204 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 11:54:24 crc kubenswrapper[4922]: E0218 11:54:24.705449 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmckq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-st9pz_openstack(855fb3ec-e473-4a99-a94f-cc96dda6d9c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:54:24 crc kubenswrapper[4922]: E0218 11:54:24.707245 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-st9pz" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.737905 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.785577 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxdxh\" (UniqueName: \"kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh\") pod \"83953c4e-9a54-453a-880f-d5e4c01608f9\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.785716 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts\") pod \"83953c4e-9a54-453a-880f-d5e4c01608f9\" (UID: \"83953c4e-9a54-453a-880f-d5e4c01608f9\") " Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.787644 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83953c4e-9a54-453a-880f-d5e4c01608f9" (UID: "83953c4e-9a54-453a-880f-d5e4c01608f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.795868 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh" (OuterVolumeSpecName: "kube-api-access-dxdxh") pod "83953c4e-9a54-453a-880f-d5e4c01608f9" (UID: "83953c4e-9a54-453a-880f-d5e4c01608f9"). InnerVolumeSpecName "kube-api-access-dxdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.888875 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxdxh\" (UniqueName: \"kubernetes.io/projected/83953c4e-9a54-453a-880f-d5e4c01608f9-kube-api-access-dxdxh\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:24 crc kubenswrapper[4922]: I0218 11:54:24.888920 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83953c4e-9a54-453a-880f-d5e4c01608f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.043946 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.091567 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.091871 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092033 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092147 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6c8\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092226 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092414 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092739 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092862 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.092946 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.093031 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config\") pod \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\" (UID: \"e35b9ac7-2e11-4096-a77a-4be1a41d737f\") " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.098913 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.108531 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.108563 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.109628 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.113724 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8" (OuterVolumeSpecName: "kube-api-access-2d6c8") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "kube-api-access-2d6c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.119579 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out" (OuterVolumeSpecName: "config-out") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.120040 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qvjkl" event={"ID":"83953c4e-9a54-453a-880f-d5e4c01608f9","Type":"ContainerDied","Data":"9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8"} Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.120086 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3edbf7a895e8645eb5ae0cc01ca375f19a1fd80e1f6971b3313bb081ae8ee8" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.120177 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qvjkl" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.126344 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e35b9ac7-2e11-4096-a77a-4be1a41d737f","Type":"ContainerDied","Data":"bf8e622508a488fa6a5aab10a0db437c746f98562fadc68cb32fcd3c28724d08"} Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.126421 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.126429 4922 scope.go:117] "RemoveContainer" containerID="184f9306d2796d4d920eb675a2e3eb752c81b217570c13dde972d69b9b2436a0" Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.128948 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-st9pz" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.130942 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.133130 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config" (OuterVolumeSpecName: "config") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.147885 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "pvc-15ee050c-37dd-43df-8cbe-1200f09a5545". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.175876 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config" (OuterVolumeSpecName: "web-config") pod "e35b9ac7-2e11-4096-a77a-4be1a41d737f" (UID: "e35b9ac7-2e11-4096-a77a-4be1a41d737f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.176018 4922 scope.go:117] "RemoveContainer" containerID="3f427700514946e216712d6593eedc4f63a171beb78d1aacdd21101e9bbb6fd9" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196496 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") on node \"crc\" " Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196526 4922 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196543 4922 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196554 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6c8\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-kube-api-access-2d6c8\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196566 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196576 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196586 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196595 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e35b9ac7-2e11-4096-a77a-4be1a41d737f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196604 4922 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e35b9ac7-2e11-4096-a77a-4be1a41d737f-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.196613 4922 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e35b9ac7-2e11-4096-a77a-4be1a41d737f-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc 
kubenswrapper[4922]: I0218 11:54:25.197663 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-996pg-config-mkd9k"] Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.208680 4922 scope.go:117] "RemoveContainer" containerID="e96b484d9840abfb2c7757906d574c4d66e64e51bbf1dbb2212180e14faff247" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.214721 4922 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.214898 4922 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15ee050c-37dd-43df-8cbe-1200f09a5545" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545") on node "crc" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.239402 4922 scope.go:117] "RemoveContainer" containerID="8a6f423971fc2df44ffe536b7fffc2792fbaae14dda7a484a39a6fbec81c8d3e" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.298077 4922 reconciler_common.go:293] "Volume detached for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.464324 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.472160 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508176 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.508617 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="init-config-reloader" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508642 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="init-config-reloader" Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.508656 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="thanos-sidecar" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508664 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="thanos-sidecar" Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.508681 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83953c4e-9a54-453a-880f-d5e4c01608f9" containerName="mariadb-account-create-update" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508692 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="83953c4e-9a54-453a-880f-d5e4c01608f9" containerName="mariadb-account-create-update" Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.508713 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="prometheus" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508721 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="prometheus" Feb 18 11:54:25 crc kubenswrapper[4922]: E0218 11:54:25.508731 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="config-reloader" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508740 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="config-reloader" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508941 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="83953c4e-9a54-453a-880f-d5e4c01608f9" containerName="mariadb-account-create-update" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508960 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="thanos-sidecar" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508973 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="config-reloader" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.508985 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" containerName="prometheus" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.510878 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.522432 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.522582 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.522649 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.522918 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.523277 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xmthr" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.523466 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.523580 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.523653 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.534588 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.539750 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.602835 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 
11:54:25.602875 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.602978 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603069 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603117 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603152 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603226 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603376 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603497 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603561 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603618 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5t7\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.603809 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705547 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705639 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705688 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5t7\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705750 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705775 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705834 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705858 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705890 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705920 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705947 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.705973 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.706012 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.707788 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.707989 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.708044 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.710776 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.711191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.711464 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.716809 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.717342 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.718105 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.718172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.718532 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.718557 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d461f0c4a551673a0d7d7003637451f1312f1b9722a2159a051859daee296e97/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.721866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.729852 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5t7\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.754166 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:25 crc kubenswrapper[4922]: I0218 11:54:25.834999 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.145317 4922 generic.go:334] "Generic (PLEG): container finished" podID="82ba5186-f0aa-4d19-a516-254374dba75f" containerID="5176cb9980de6bcd0a67b80f4ff01a72286ab10295b4c3d177fe01a39914f0b0" exitCode=0 Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.145617 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-996pg-config-mkd9k" event={"ID":"82ba5186-f0aa-4d19-a516-254374dba75f","Type":"ContainerDied","Data":"5176cb9980de6bcd0a67b80f4ff01a72286ab10295b4c3d177fe01a39914f0b0"} Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.145643 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-996pg-config-mkd9k" event={"ID":"82ba5186-f0aa-4d19-a516-254374dba75f","Type":"ContainerStarted","Data":"b06f706f916de7eacc8759839f38b70d5279b78a62e299714b26a81bb5f0da22"} Feb 18 11:54:26 crc kubenswrapper[4922]: W0218 11:54:26.205783 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791eb2d2_e8b0_401d_8b8b_b79d675c1ca4.slice/crio-eb650b51fbe53331680115b5b916f11fa8f568a0682dba6d2801da306b88e53d WatchSource:0}: Error finding container eb650b51fbe53331680115b5b916f11fa8f568a0682dba6d2801da306b88e53d: Status 404 returned error can't find the container with id eb650b51fbe53331680115b5b916f11fa8f568a0682dba6d2801da306b88e53d Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.211015 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.983964 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35b9ac7-2e11-4096-a77a-4be1a41d737f" path="/var/lib/kubelet/pods/e35b9ac7-2e11-4096-a77a-4be1a41d737f/volumes" Feb 18 11:54:26 crc kubenswrapper[4922]: I0218 11:54:26.985011 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-996pg" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.154223 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerStarted","Data":"eb650b51fbe53331680115b5b916f11fa8f568a0682dba6d2801da306b88e53d"} Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.507817 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.538817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.538867 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.538945 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.538978 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539008 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc75n\" (UniqueName: \"kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539039 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn\") pod \"82ba5186-f0aa-4d19-a516-254374dba75f\" (UID: \"82ba5186-f0aa-4d19-a516-254374dba75f\") " Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539033 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run" (OuterVolumeSpecName: "var-run") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539094 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539552 4922 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539572 4922 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539614 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.539834 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.540304 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts" (OuterVolumeSpecName: "scripts") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.545291 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n" (OuterVolumeSpecName: "kube-api-access-pc75n") pod "82ba5186-f0aa-4d19-a516-254374dba75f" (UID: "82ba5186-f0aa-4d19-a516-254374dba75f"). InnerVolumeSpecName "kube-api-access-pc75n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.641231 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.641340 4922 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82ba5186-f0aa-4d19-a516-254374dba75f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.641352 4922 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.641373 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82ba5186-f0aa-4d19-a516-254374dba75f-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.641382 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc75n\" (UniqueName: \"kubernetes.io/projected/82ba5186-f0aa-4d19-a516-254374dba75f-kube-api-access-pc75n\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.645898 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0771bdc1-7622-4a65-aa82-3150630ce652-etc-swift\") pod \"swift-storage-0\" (UID: \"0771bdc1-7622-4a65-aa82-3150630ce652\") " pod="openstack/swift-storage-0" Feb 18 11:54:27 crc kubenswrapper[4922]: I0218 11:54:27.743389 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.164589 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-996pg-config-mkd9k" event={"ID":"82ba5186-f0aa-4d19-a516-254374dba75f","Type":"ContainerDied","Data":"b06f706f916de7eacc8759839f38b70d5279b78a62e299714b26a81bb5f0da22"} Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.164883 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06f706f916de7eacc8759839f38b70d5279b78a62e299714b26a81bb5f0da22" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.164725 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-996pg-config-mkd9k" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.257273 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 11:54:28 crc kubenswrapper[4922]: W0218 11:54:28.365562 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0771bdc1_7622_4a65_aa82_3150630ce652.slice/crio-38b08c7945f70d47345b2139c350579b55cad071b52f78b49b6dc84b2b6dc24f WatchSource:0}: Error finding container 38b08c7945f70d47345b2139c350579b55cad071b52f78b49b6dc84b2b6dc24f: Status 404 returned error can't find the container with id 38b08c7945f70d47345b2139c350579b55cad071b52f78b49b6dc84b2b6dc24f Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.528521 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.643687 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-996pg-config-mkd9k"] Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.653625 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-996pg-config-mkd9k"] Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.873808 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.994575 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ba5186-f0aa-4d19-a516-254374dba75f" path="/var/lib/kubelet/pods/82ba5186-f0aa-4d19-a516-254374dba75f/volumes" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.995450 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g95qz"] Feb 18 11:54:28 crc kubenswrapper[4922]: E0218 11:54:28.995920 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ba5186-f0aa-4d19-a516-254374dba75f" containerName="ovn-config" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.996015 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ba5186-f0aa-4d19-a516-254374dba75f" containerName="ovn-config" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.996399 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ba5186-f0aa-4d19-a516-254374dba75f" containerName="ovn-config" Feb 18 11:54:28 crc kubenswrapper[4922]: I0218 11:54:28.997307 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.054969 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g95qz"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.139329 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-sznrv"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.143696 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.147069 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-sznrv"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.152773 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-82jsz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.153015 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.170531 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkbqt\" (UniqueName: \"kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.171383 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.180123 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"38b08c7945f70d47345b2139c350579b55cad071b52f78b49b6dc84b2b6dc24f"} Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.185658 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f0db-account-create-update-vwv99"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.186848 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.200635 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.251352 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f0db-account-create-update-vwv99"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273353 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273454 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmr9\" (UniqueName: \"kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273528 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkbqt\" (UniqueName: \"kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273614 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273660 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.273797 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwsh\" (UniqueName: \"kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh\") pod 
\"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.274858 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.338695 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkbqt\" (UniqueName: \"kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt\") pod \"cinder-db-create-g95qz\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.339129 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375575 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375702 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxwsh\" (UniqueName: \"kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375786 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.375864 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmr9\" (UniqueName: \"kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.376117 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.381800 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.397932 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.399516 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.437187 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmr9\" (UniqueName: \"kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9\") pod \"cinder-f0db-account-create-update-vwv99\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.437288 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxwsh\" (UniqueName: \"kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh\") pod \"watcher-db-sync-sznrv\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.449479 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-59fc-account-create-update-29wvh"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.450842 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.455521 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.474420 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sznrv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.478273 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-czsfv"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.479711 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.497873 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-59fc-account-create-update-29wvh"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.521426 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-czsfv"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.526581 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.586841 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fnkcj"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.592240 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkd4\" (UniqueName: \"kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.592286 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.592348 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mhx\" (UniqueName: \"kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.592399 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.593223 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.598254 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fnkcj"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.628303 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8rn4" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.628968 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.629296 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.629414 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.640023 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b89-account-create-update-x8q45"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.645301 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.661723 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696497 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696640 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkd4\" (UniqueName: \"kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696669 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsps6\" (UniqueName: \"kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696698 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696724 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696932 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-65mhx\" (UniqueName: \"kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.696978 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.697729 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.697899 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.712505 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b89-account-create-update-x8q45"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.741504 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mvrlh"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.743009 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.753571 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkd4\" (UniqueName: \"kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4\") pod \"barbican-db-create-czsfv\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.754489 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65mhx\" (UniqueName: \"kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx\") pod \"barbican-59fc-account-create-update-29wvh\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.765255 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mvrlh"] Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.798226 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.798318 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsps6\" (UniqueName: \"kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.798344 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.798404 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.798474 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djps\" (UniqueName: \"kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.812111 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.817707 4922 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.818052 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsps6\" (UniqueName: \"kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6\") pod \"keystone-db-sync-fnkcj\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.846747 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.876304 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.902179 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.902242 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.902323 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djps\" (UniqueName: \"kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.902355 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhfx\" (UniqueName: \"kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.907585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.959088 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djps\" (UniqueName: \"kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps\") pod \"neutron-5b89-account-create-update-x8q45\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:29 crc kubenswrapper[4922]: I0218 11:54:29.972905 4922 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.005883 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhhfx\" (UniqueName: \"kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.006018 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.006703 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.056493 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhhfx\" (UniqueName: \"kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx\") pod \"neutron-db-create-mvrlh\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.073667 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.112539 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.164182 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qvjkl"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.183242 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qvjkl"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.201584 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerStarted","Data":"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8"} Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.252395 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g95qz"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.415945 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-sznrv"] Feb 18 11:54:30 crc kubenswrapper[4922]: W0218 11:54:30.458927 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f835c05_4bbb_4678_9410_8523cf308f05.slice/crio-64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d WatchSource:0}: Error finding container 64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d: Status 404 returned error can't find the container with id 64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.476256 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-czsfv"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.539107 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f0db-account-create-update-vwv99"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.651106 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-59fc-account-create-update-29wvh"] Feb 18 11:54:30 crc kubenswrapper[4922]: W0218 11:54:30.692929 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76ba1bc1_3352_42a4_a80e_2fc2ac0e66eb.slice/crio-1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a WatchSource:0}: Error finding container 1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a: Status 404 returned error can't find the container with id 1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.701426 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b89-account-create-update-x8q45"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.870860 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fnkcj"] Feb 18 11:54:30 crc kubenswrapper[4922]: I0218 11:54:30.908499 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mvrlh"] Feb 18 11:54:30 crc kubenswrapper[4922]: W0218 11:54:30.913711 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46a88eb1_e58a_437e_b1eb_2dcb7e80b37f.slice/crio-c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab WatchSource:0}: Error finding container c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab: Status 404 returned 
error can't find the container with id c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.015774 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83953c4e-9a54-453a-880f-d5e4c01608f9" path="/var/lib/kubelet/pods/83953c4e-9a54-453a-880f-d5e4c01608f9/volumes" Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.211591 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g95qz" event={"ID":"b521417c-1968-49ee-8435-9e44af7e8a52","Type":"ContainerStarted","Data":"9b8999d1e83a0573e94d5cd921d708e3242c66fa9f1b39ddd8ee350a09e0dbf3"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.213448 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0db-account-create-update-vwv99" event={"ID":"87604619-ec13-480d-9456-c5062685287d","Type":"ContainerStarted","Data":"2bdf174ced138c4cd3a885471b1652700fd63cb3ebd9aa23d03d3a168ff65eb0"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.215276 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sznrv" event={"ID":"5f835c05-4bbb-4678-9410-8523cf308f05","Type":"ContainerStarted","Data":"64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.217947 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-59fc-account-create-update-29wvh" event={"ID":"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb","Type":"ContainerStarted","Data":"1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.219555 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mvrlh" event={"ID":"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f","Type":"ContainerStarted","Data":"c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.222056 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fnkcj" event={"ID":"2102ef9b-8151-4edf-8b43-7c4486203911","Type":"ContainerStarted","Data":"e7cf1c0a3ad45fa91a71f51e0bf0901154e6b84ba139717db0e095e711718562"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.223816 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b89-account-create-update-x8q45" event={"ID":"b9e55f2d-153c-47a0-95c4-84f8795ca57e","Type":"ContainerStarted","Data":"4d88076fc7a40e7ade149549d2f2ef5ee9743a2e42595b2835c326127fe730f0"} Feb 18 11:54:31 crc kubenswrapper[4922]: I0218 11:54:31.226652 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czsfv" event={"ID":"3f413eca-d25a-4b47-82f6-e25088b65f2d","Type":"ContainerStarted","Data":"284db3161b4e11bde939d2e242b4a6f4a60d3fc946f9e2e18054dbeb8b0bcecd"} Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.233899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0db-account-create-update-vwv99" event={"ID":"87604619-ec13-480d-9456-c5062685287d","Type":"ContainerStarted","Data":"459c3dccc59cc9a7a25c5999d57a9a40f0164adf154409ca83fde7883515bc8e"} Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.235539 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-59fc-account-create-update-29wvh" event={"ID":"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb","Type":"ContainerStarted","Data":"2773d6c651dc9198f969541910251805f8f5c022eea560495fc1c18971e69a25"} Feb 18 11:54:32 
crc kubenswrapper[4922]: I0218 11:54:32.237062 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mvrlh" event={"ID":"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f","Type":"ContainerStarted","Data":"c059c2685304c1bbe7cb0f5bf34e25f7ad09441877372a8258fba3155ab8954a"} Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.239012 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b89-account-create-update-x8q45" event={"ID":"b9e55f2d-153c-47a0-95c4-84f8795ca57e","Type":"ContainerStarted","Data":"f872729a376e1215c5740d39fd4f7a50b0af6397c21fc8ffd7ccc6655e4b95bf"} Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.241286 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czsfv" event={"ID":"3f413eca-d25a-4b47-82f6-e25088b65f2d","Type":"ContainerStarted","Data":"0f464cbb217061ebc522def4ee4f957ba0ab7d4cce7acd97b0cad24b01e4e4cc"} Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.266119 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f0db-account-create-update-vwv99" podStartSLOduration=3.266099768 podStartE2EDuration="3.266099768s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:32.260082435 +0000 UTC m=+1073.987786515" watchObservedRunningTime="2026-02-18 11:54:32.266099768 +0000 UTC m=+1073.993803848" Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.280810 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b89-account-create-update-x8q45" podStartSLOduration=3.280793141 podStartE2EDuration="3.280793141s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:32.278159324 +0000 UTC m=+1074.005863404" watchObservedRunningTime="2026-02-18 11:54:32.280793141 +0000 UTC m=+1074.008497221" Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.308457 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mvrlh" podStartSLOduration=3.308440003 podStartE2EDuration="3.308440003s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:32.304876393 +0000 UTC m=+1074.032580483" watchObservedRunningTime="2026-02-18 11:54:32.308440003 +0000 UTC m=+1074.036144083" Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.341010 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-czsfv" podStartSLOduration=3.34098785 podStartE2EDuration="3.34098785s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:32.327741574 +0000 UTC m=+1074.055445654" watchObservedRunningTime="2026-02-18 11:54:32.34098785 +0000 UTC m=+1074.068691930" Feb 18 11:54:32 crc kubenswrapper[4922]: I0218 11:54:32.353005 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-59fc-account-create-update-29wvh" podStartSLOduration=3.352983595 podStartE2EDuration="3.352983595s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:32.346852819 +0000 UTC m=+1074.074556909" watchObservedRunningTime="2026-02-18 11:54:32.352983595 +0000 UTC m=+1074.080687675" Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.250834 4922 generic.go:334] "Generic (PLEG): container finished" podID="87604619-ec13-480d-9456-c5062685287d" containerID="459c3dccc59cc9a7a25c5999d57a9a40f0164adf154409ca83fde7883515bc8e" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.251152 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0db-account-create-update-vwv99" event={"ID":"87604619-ec13-480d-9456-c5062685287d","Type":"ContainerDied","Data":"459c3dccc59cc9a7a25c5999d57a9a40f0164adf154409ca83fde7883515bc8e"} Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.253652 4922 generic.go:334] "Generic (PLEG): container finished" podID="76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" containerID="2773d6c651dc9198f969541910251805f8f5c022eea560495fc1c18971e69a25" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.253698 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-59fc-account-create-update-29wvh" event={"ID":"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb","Type":"ContainerDied","Data":"2773d6c651dc9198f969541910251805f8f5c022eea560495fc1c18971e69a25"} Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.255456 4922 generic.go:334] "Generic (PLEG): container finished" podID="46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" containerID="c059c2685304c1bbe7cb0f5bf34e25f7ad09441877372a8258fba3155ab8954a" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.255495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mvrlh" event={"ID":"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f","Type":"ContainerDied","Data":"c059c2685304c1bbe7cb0f5bf34e25f7ad09441877372a8258fba3155ab8954a"} Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.257283 4922 generic.go:334] "Generic (PLEG): container finished" podID="b9e55f2d-153c-47a0-95c4-84f8795ca57e" containerID="f872729a376e1215c5740d39fd4f7a50b0af6397c21fc8ffd7ccc6655e4b95bf" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.257324 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b89-account-create-update-x8q45" event={"ID":"b9e55f2d-153c-47a0-95c4-84f8795ca57e","Type":"ContainerDied","Data":"f872729a376e1215c5740d39fd4f7a50b0af6397c21fc8ffd7ccc6655e4b95bf"} Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.258890 4922 generic.go:334] "Generic (PLEG): container finished" podID="3f413eca-d25a-4b47-82f6-e25088b65f2d" containerID="0f464cbb217061ebc522def4ee4f957ba0ab7d4cce7acd97b0cad24b01e4e4cc" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.258939 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czsfv" event={"ID":"3f413eca-d25a-4b47-82f6-e25088b65f2d","Type":"ContainerDied","Data":"0f464cbb217061ebc522def4ee4f957ba0ab7d4cce7acd97b0cad24b01e4e4cc"} Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.270094 4922 generic.go:334] "Generic (PLEG): container finished" podID="b521417c-1968-49ee-8435-9e44af7e8a52" containerID="cb46a05482d6c2364b368bb1c2e067b6de93db6a4072db86c206647939a79206" exitCode=0 Feb 18 11:54:33 crc kubenswrapper[4922]: I0218 11:54:33.270145 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g95qz" 
event={"ID":"b521417c-1968-49ee-8435-9e44af7e8a52","Type":"ContainerDied","Data":"cb46a05482d6c2364b368bb1c2e067b6de93db6a4072db86c206647939a79206"} Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.291110 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"cb3763398a9b0ccacee702791c679ac99f40356ddb1a484f571a730c4fc2b052"} Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.291156 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"22c5e506a27e803c8fdafc19727c85eb82a49af3fcf31981e10ac5d6c6be1f28"} Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.819565 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.874558 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.937283 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts\") pod \"87604619-ec13-480d-9456-c5062685287d\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.937332 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmr9\" (UniqueName: \"kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9\") pod \"87604619-ec13-480d-9456-c5062685287d\" (UID: \"87604619-ec13-480d-9456-c5062685287d\") " Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.937377 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgkd4\" (UniqueName: \"kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4\") pod \"3f413eca-d25a-4b47-82f6-e25088b65f2d\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.937532 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts\") pod \"3f413eca-d25a-4b47-82f6-e25088b65f2d\" (UID: \"3f413eca-d25a-4b47-82f6-e25088b65f2d\") " Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.937965 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87604619-ec13-480d-9456-c5062685287d" (UID: "87604619-ec13-480d-9456-c5062685287d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.940076 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f413eca-d25a-4b47-82f6-e25088b65f2d" (UID: "3f413eca-d25a-4b47-82f6-e25088b65f2d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.944658 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4" (OuterVolumeSpecName: "kube-api-access-qgkd4") pod "3f413eca-d25a-4b47-82f6-e25088b65f2d" (UID: "3f413eca-d25a-4b47-82f6-e25088b65f2d"). InnerVolumeSpecName "kube-api-access-qgkd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.946244 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9" (OuterVolumeSpecName: "kube-api-access-2cmr9") pod "87604619-ec13-480d-9456-c5062685287d" (UID: "87604619-ec13-480d-9456-c5062685287d"). InnerVolumeSpecName "kube-api-access-2cmr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.947615 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:34 crc kubenswrapper[4922]: I0218 11:54:34.948084 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.001785 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.038349 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.038519 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts\") pod \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.038625 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkbqt\" (UniqueName: \"kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt\") pod \"b521417c-1968-49ee-8435-9e44af7e8a52\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.038787 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts\") pod \"b521417c-1968-49ee-8435-9e44af7e8a52\" (UID: \"b521417c-1968-49ee-8435-9e44af7e8a52\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.038809 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65mhx\" (UniqueName: \"kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx\") pod \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\" (UID: \"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.039279 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f413eca-d25a-4b47-82f6-e25088b65f2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.039300 4922 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87604619-ec13-480d-9456-c5062685287d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.039312 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmr9\" (UniqueName: \"kubernetes.io/projected/87604619-ec13-480d-9456-c5062685287d-kube-api-access-2cmr9\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.039325 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgkd4\" (UniqueName: \"kubernetes.io/projected/3f413eca-d25a-4b47-82f6-e25088b65f2d-kube-api-access-qgkd4\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.039856 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" (UID: "76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.041055 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b521417c-1968-49ee-8435-9e44af7e8a52" (UID: "b521417c-1968-49ee-8435-9e44af7e8a52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.067512 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt" (OuterVolumeSpecName: "kube-api-access-wkbqt") pod "b521417c-1968-49ee-8435-9e44af7e8a52" (UID: "b521417c-1968-49ee-8435-9e44af7e8a52"). InnerVolumeSpecName "kube-api-access-wkbqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.073973 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx" (OuterVolumeSpecName: "kube-api-access-65mhx") pod "76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" (UID: "76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb"). InnerVolumeSpecName "kube-api-access-65mhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.140851 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhhfx\" (UniqueName: \"kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx\") pod \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.141021 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts\") pod \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.141080 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9djps\" (UniqueName: \"kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps\") pod \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\" (UID: \"b9e55f2d-153c-47a0-95c4-84f8795ca57e\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.141583 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts\") pod \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\" (UID: \"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f\") " Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.142889 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkbqt\" (UniqueName: \"kubernetes.io/projected/b521417c-1968-49ee-8435-9e44af7e8a52-kube-api-access-wkbqt\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.142913 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b521417c-1968-49ee-8435-9e44af7e8a52-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.142927 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65mhx\" (UniqueName: \"kubernetes.io/projected/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-kube-api-access-65mhx\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.142940 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.143756 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" (UID: "46a88eb1-e58a-437e-b1eb-2dcb7e80b37f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.147885 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx" (OuterVolumeSpecName: "kube-api-access-qhhfx") pod "46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" (UID: "46a88eb1-e58a-437e-b1eb-2dcb7e80b37f"). InnerVolumeSpecName "kube-api-access-qhhfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.148391 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9e55f2d-153c-47a0-95c4-84f8795ca57e" (UID: "b9e55f2d-153c-47a0-95c4-84f8795ca57e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.150940 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps" (OuterVolumeSpecName: "kube-api-access-9djps") pod "b9e55f2d-153c-47a0-95c4-84f8795ca57e" (UID: "b9e55f2d-153c-47a0-95c4-84f8795ca57e"). InnerVolumeSpecName "kube-api-access-9djps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202072 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xhpvj"] Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202440 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87604619-ec13-480d-9456-c5062685287d" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202471 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="87604619-ec13-480d-9456-c5062685287d" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202487 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202496 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202512 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202521 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202535 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f413eca-d25a-4b47-82f6-e25088b65f2d" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202545 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f413eca-d25a-4b47-82f6-e25088b65f2d" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202557 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b521417c-1968-49ee-8435-9e44af7e8a52" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202563 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b521417c-1968-49ee-8435-9e44af7e8a52" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: E0218 11:54:35.202574 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e55f2d-153c-47a0-95c4-84f8795ca57e" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202580 4922 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b9e55f2d-153c-47a0-95c4-84f8795ca57e" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202721 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202734 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b521417c-1968-49ee-8435-9e44af7e8a52" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202751 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e55f2d-153c-47a0-95c4-84f8795ca57e" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202763 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="87604619-ec13-480d-9456-c5062685287d" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202772 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" containerName="mariadb-account-create-update" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.202781 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f413eca-d25a-4b47-82f6-e25088b65f2d" containerName="mariadb-database-create" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.203255 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.212350 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.216826 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpvj"] Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.245098 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.245156 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhhfx\" (UniqueName: \"kubernetes.io/projected/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f-kube-api-access-qhhfx\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.245173 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e55f2d-153c-47a0-95c4-84f8795ca57e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.245187 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9djps\" (UniqueName: \"kubernetes.io/projected/b9e55f2d-153c-47a0-95c4-84f8795ca57e-kube-api-access-9djps\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.315134 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-59fc-account-create-update-29wvh" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.315159 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-59fc-account-create-update-29wvh" event={"ID":"76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb","Type":"ContainerDied","Data":"1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.316761 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d81707c5bd2ed4c306d7d2993e386735f9bc3f51595e0bf23abec0c6fba5f1a" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.317981 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mvrlh" event={"ID":"46a88eb1-e58a-437e-b1eb-2dcb7e80b37f","Type":"ContainerDied","Data":"c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.318005 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03df322018b9138faee6254589c424bb6e088b4296997dd45244c942e4db7ab" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.318047 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mvrlh" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.322977 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b89-account-create-update-x8q45" event={"ID":"b9e55f2d-153c-47a0-95c4-84f8795ca57e","Type":"ContainerDied","Data":"4d88076fc7a40e7ade149549d2f2ef5ee9743a2e42595b2835c326127fe730f0"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.323013 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d88076fc7a40e7ade149549d2f2ef5ee9743a2e42595b2835c326127fe730f0" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.323011 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b89-account-create-update-x8q45" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.328488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"11f435f7a138ff88d942d18784f7daee9767063e1ca8c9d22468b614fb9184ae"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.328538 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"ecf7f29fde1a718be3812f570f693693cf5186801e692c9023c577d084285ea0"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.330321 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-czsfv" event={"ID":"3f413eca-d25a-4b47-82f6-e25088b65f2d","Type":"ContainerDied","Data":"284db3161b4e11bde939d2e242b4a6f4a60d3fc946f9e2e18054dbeb8b0bcecd"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.330347 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-czsfv" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.330382 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="284db3161b4e11bde939d2e242b4a6f4a60d3fc946f9e2e18054dbeb8b0bcecd" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.331803 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g95qz" event={"ID":"b521417c-1968-49ee-8435-9e44af7e8a52","Type":"ContainerDied","Data":"9b8999d1e83a0573e94d5cd921d708e3242c66fa9f1b39ddd8ee350a09e0dbf3"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.331835 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8999d1e83a0573e94d5cd921d708e3242c66fa9f1b39ddd8ee350a09e0dbf3" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.331898 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g95qz" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.335142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f0db-account-create-update-vwv99" event={"ID":"87604619-ec13-480d-9456-c5062685287d","Type":"ContainerDied","Data":"2bdf174ced138c4cd3a885471b1652700fd63cb3ebd9aa23d03d3a168ff65eb0"} Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.335176 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdf174ced138c4cd3a885471b1652700fd63cb3ebd9aa23d03d3a168ff65eb0" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.335247 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f0db-account-create-update-vwv99" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.346279 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpsmc\" (UniqueName: \"kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.346356 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.448296 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpsmc\" (UniqueName: \"kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.448459 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.449524 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.469087 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpsmc\" (UniqueName: \"kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc\") pod \"root-account-create-update-xhpvj\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:35 crc kubenswrapper[4922]: I0218 11:54:35.530759 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:36 crc kubenswrapper[4922]: I0218 11:54:36.345181 4922 generic.go:334] "Generic (PLEG): container finished" podID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerID="66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8" exitCode=0 Feb 18 11:54:36 crc kubenswrapper[4922]: I0218 11:54:36.345329 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerDied","Data":"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8"} Feb 18 11:54:49 crc kubenswrapper[4922]: I0218 11:54:49.584288 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xhpvj"] Feb 18 11:54:52 crc kubenswrapper[4922]: W0218 11:54:52.573601 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f2ad2ed_7e29_4760_aa9f_e5bf6bc96056.slice/crio-cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97 WatchSource:0}: Error finding container cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97: Status 404 returned error can't find the container with id cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97 Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.205091 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.205282 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmckq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-st9pz_openstack(855fb3ec-e473-4a99-a94f-cc96dda6d9c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.206766 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-st9pz" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" Feb 18 11:54:53 crc kubenswrapper[4922]: I0218 11:54:53.207019 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.342296 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.342351 4922 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.158:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.342624 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.158:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxwsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-sznrv_openstack(5f835c05-4bbb-4678-9410-8523cf308f05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.343960 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-sznrv" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" Feb 18 11:54:53 crc kubenswrapper[4922]: I0218 11:54:53.494799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpvj" event={"ID":"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056","Type":"ContainerStarted","Data":"cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97"} Feb 18 11:54:53 crc kubenswrapper[4922]: E0218 11:54:53.497328 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.158:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-sznrv" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.505798 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerStarted","Data":"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b"} Feb 18 11:54:54 crc 
kubenswrapper[4922]: I0218 11:54:54.507604 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpvj" event={"ID":"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056","Type":"ContainerStarted","Data":"3b378847d4d22bec8f5e44e34e16024756ec3850a6ac282a1c69fb4c7cbed59f"} Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.509313 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fnkcj" event={"ID":"2102ef9b-8151-4edf-8b43-7c4486203911","Type":"ContainerStarted","Data":"96a538608703e5f575eecc4d7943dd23e9a2c36f206b74a346320d63b212ab98"} Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.513144 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"2d7ee715b5d934a60ccc4fa8fe636b8ae65649af013d76a0d7146d35621583f4"} Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.513167 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"3dcd45d219e5564a608d73b6a762cf85b1185176c70bb51d25995c0d9ffda06e"} Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.529093 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-xhpvj" podStartSLOduration=19.529068552 podStartE2EDuration="19.529068552s" podCreationTimestamp="2026-02-18 11:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:54.524923287 +0000 UTC m=+1096.252627367" watchObservedRunningTime="2026-02-18 11:54:54.529068552 +0000 UTC m=+1096.256772632" Feb 18 11:54:54 crc kubenswrapper[4922]: I0218 11:54:54.549166 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fnkcj" podStartSLOduration=3.20674165 podStartE2EDuration="25.549148532s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="2026-02-18 11:54:30.878506427 +0000 UTC m=+1072.606210507" lastFinishedPulling="2026-02-18 11:54:53.220913269 +0000 UTC m=+1094.948617389" observedRunningTime="2026-02-18 11:54:54.542924754 +0000 UTC m=+1096.270628834" watchObservedRunningTime="2026-02-18 11:54:54.549148532 +0000 UTC m=+1096.276852612" Feb 18 11:54:55 crc kubenswrapper[4922]: I0218 11:54:55.523289 4922 generic.go:334] "Generic (PLEG): container finished" podID="3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" containerID="3b378847d4d22bec8f5e44e34e16024756ec3850a6ac282a1c69fb4c7cbed59f" exitCode=0 Feb 18 11:54:55 crc kubenswrapper[4922]: I0218 11:54:55.523609 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpvj" event={"ID":"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056","Type":"ContainerDied","Data":"3b378847d4d22bec8f5e44e34e16024756ec3850a6ac282a1c69fb4c7cbed59f"} Feb 18 11:54:55 crc kubenswrapper[4922]: I0218 11:54:55.529725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"702a0b64ae0a6f1df760160dcae295326177206b6911d0b37de4d77ff3643292"} Feb 18 11:54:55 crc kubenswrapper[4922]: I0218 11:54:55.529807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"3f83a7028420b34f585a6a85938c72318f7af775dde1cb5b8b3d7caac762d7fd"} Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.844194 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.975540 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpsmc\" (UniqueName: \"kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc\") pod \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.975630 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts\") pod \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\" (UID: \"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056\") " Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.976163 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" (UID: "3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.976888 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:56 crc kubenswrapper[4922]: I0218 11:54:56.984566 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc" (OuterVolumeSpecName: "kube-api-access-rpsmc") pod "3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" (UID: "3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056"). InnerVolumeSpecName "kube-api-access-rpsmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.080671 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpsmc\" (UniqueName: \"kubernetes.io/projected/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056-kube-api-access-rpsmc\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.551829 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerStarted","Data":"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b"} Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.552123 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerStarted","Data":"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c"} Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.553164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xhpvj" event={"ID":"3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056","Type":"ContainerDied","Data":"cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97"} Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.553191 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda5ec3708d7b2f4b55f59d5b6c60d14db4ff2054fe6b3b308bcdd72c7da1a97" Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.553252 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xhpvj" Feb 18 11:54:57 crc kubenswrapper[4922]: I0218 11:54:57.589145 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=32.58912237 podStartE2EDuration="32.58912237s" podCreationTimestamp="2026-02-18 11:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:57.588687069 +0000 UTC m=+1099.316391169" watchObservedRunningTime="2026-02-18 11:54:57.58912237 +0000 UTC m=+1099.316826450" Feb 18 11:54:58 crc kubenswrapper[4922]: I0218 11:54:58.569245 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"8683da66dc56b9b992d4f71e8d035f8d162e0768fc93b7f6f278e9c7a3505316"} Feb 18 11:54:58 crc kubenswrapper[4922]: I0218 11:54:58.570307 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"409439d3613775a0a0cfc3cd788d4d9d8790e06d10bbfd2d942990950e52afee"} Feb 18 11:54:59 crc kubenswrapper[4922]: I0218 11:54:59.592780 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"defe5ce8a73e6664263abcc81c3e02689b14d12d9d5d4caf56ccaebce514415c"} Feb 18 11:54:59 crc kubenswrapper[4922]: I0218 11:54:59.593117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"cf5b0c68a4a569fd7722160f29c535e4de61e15cc54c9dc86a3cb77f9958e9ab"} Feb 18 11:54:59 crc kubenswrapper[4922]: I0218 11:54:59.593129 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"7a73f4521bddd4bc67844b95a569e63aa1de2a3491f087efb5f6e1bc4176cf13"} Feb 18 11:55:00 crc kubenswrapper[4922]: I0218 11:55:00.835987 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 11:55:05 crc kubenswrapper[4922]: E0218 11:55:05.974075 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-st9pz" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" Feb 18 11:55:07 crc kubenswrapper[4922]: I0218 11:55:07.666861 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"dabd0d755dd7bd6a2c1225baf5ee9fb272de42f7707117b64cebceea782eb8a5"} Feb 18 11:55:08 crc kubenswrapper[4922]: I0218 11:55:08.680120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0771bdc1-7622-4a65-aa82-3150630ce652","Type":"ContainerStarted","Data":"7d6ea9e74abfefdd59a5398765c2930b709aecca0131c57ee9ddb2139819fc4b"} Feb 18 11:55:08 crc kubenswrapper[4922]: I0218 11:55:08.686269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sznrv" event={"ID":"5f835c05-4bbb-4678-9410-8523cf308f05","Type":"ContainerStarted","Data":"7b6ad8f3e92b6bbfd943139b3553e32910b71922866701c9884de500a3eaebeb"} Feb 18 11:55:08 crc kubenswrapper[4922]: I0218 11:55:08.737973 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.504890019 podStartE2EDuration="1m14.737954333s" podCreationTimestamp="2026-02-18 11:53:54 +0000 UTC" firstStartedPulling="2026-02-18 11:54:28.3676229 +0000 UTC m=+1070.095326980" lastFinishedPulling="2026-02-18 11:54:57.600687214 +0000 UTC m=+1099.328391294" observedRunningTime="2026-02-18 11:55:08.732334911 +0000 UTC m=+1110.460038991" watchObservedRunningTime="2026-02-18 11:55:08.737954333 +0000 UTC m=+1110.465658413" Feb 18 11:55:08 crc kubenswrapper[4922]: I0218 11:55:08.774875 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-sznrv" podStartSLOduration=2.26859052 podStartE2EDuration="39.774849496s" podCreationTimestamp="2026-02-18 11:54:29 +0000 UTC" firstStartedPulling="2026-02-18 11:54:30.475470208 +0000 UTC m=+1072.203174288" lastFinishedPulling="2026-02-18 11:55:07.981729184 +0000 UTC m=+1109.709433264" observedRunningTime="2026-02-18 11:55:08.758199545 +0000 UTC m=+1110.485903635" watchObservedRunningTime="2026-02-18 11:55:08.774849496 +0000 UTC m=+1110.502553576" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.048264 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:09 crc kubenswrapper[4922]: E0218 11:55:09.048851 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" containerName="mariadb-account-create-update" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.048870 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" containerName="mariadb-account-create-update" Feb 18 11:55:09 crc 
kubenswrapper[4922]: I0218 11:55:09.049066 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" containerName="mariadb-account-create-update" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.049982 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.052809 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.067433 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.153915 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.154005 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.154050 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn24f\" (UniqueName: \"kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.154075 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.154099 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.154120 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.255966 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 
11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.256077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.256125 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn24f\" (UniqueName: \"kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.256151 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.256185 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.256215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.257267 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.257270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.257419 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.257696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.257772 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.276150 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn24f\" (UniqueName: \"kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f\") pod \"dnsmasq-dns-5c79d794d7-zpx62\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.368524 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:09 crc kubenswrapper[4922]: I0218 11:55:09.846484 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:10 crc kubenswrapper[4922]: I0218 11:55:10.705953 4922 generic.go:334] "Generic (PLEG): container finished" podID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerID="de3eb0b81c2ecf51916540dbcf6e765b8396d655b971b5b5ee803c68d62a7d58" exitCode=0 Feb 18 11:55:10 crc kubenswrapper[4922]: I0218 11:55:10.706112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" event={"ID":"df0f7ce3-64d4-45c3-b416-58e49b5b5bac","Type":"ContainerDied","Data":"de3eb0b81c2ecf51916540dbcf6e765b8396d655b971b5b5ee803c68d62a7d58"} Feb 18 11:55:10 crc kubenswrapper[4922]: I0218 11:55:10.706309 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" event={"ID":"df0f7ce3-64d4-45c3-b416-58e49b5b5bac","Type":"ContainerStarted","Data":"d6fd96a63f0eb8def4408247acfb363976fe91401640a5a1bfb2074f5c37e39c"} Feb 18 11:55:10 crc kubenswrapper[4922]: I0218 11:55:10.835892 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 11:55:10 crc kubenswrapper[4922]: I0218 11:55:10.844110 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 11:55:11 crc kubenswrapper[4922]: I0218 11:55:11.716547 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" event={"ID":"df0f7ce3-64d4-45c3-b416-58e49b5b5bac","Type":"ContainerStarted","Data":"b46ddfc217eb56701f6d0cade709faba90adfb7a9e5d395fb4ad4edb78eeacbf"} Feb 18 11:55:11 crc kubenswrapper[4922]: I0218 11:55:11.716882 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:11 crc kubenswrapper[4922]: I0218 11:55:11.725516 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 11:55:11 crc kubenswrapper[4922]: I0218 11:55:11.747969 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" podStartSLOduration=2.747951703 podStartE2EDuration="2.747951703s" podCreationTimestamp="2026-02-18 11:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:55:11.742181657 +0000 UTC m=+1113.469885777" watchObservedRunningTime="2026-02-18 11:55:11.747951703 +0000 UTC m=+1113.475655783" Feb 18 11:55:16 crc kubenswrapper[4922]: I0218 
11:55:16.768030 4922 generic.go:334] "Generic (PLEG): container finished" podID="5f835c05-4bbb-4678-9410-8523cf308f05" containerID="7b6ad8f3e92b6bbfd943139b3553e32910b71922866701c9884de500a3eaebeb" exitCode=0 Feb 18 11:55:16 crc kubenswrapper[4922]: I0218 11:55:16.768258 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sznrv" event={"ID":"5f835c05-4bbb-4678-9410-8523cf308f05","Type":"ContainerDied","Data":"7b6ad8f3e92b6bbfd943139b3553e32910b71922866701c9884de500a3eaebeb"} Feb 18 11:55:16 crc kubenswrapper[4922]: I0218 11:55:16.772187 4922 generic.go:334] "Generic (PLEG): container finished" podID="2102ef9b-8151-4edf-8b43-7c4486203911" containerID="96a538608703e5f575eecc4d7943dd23e9a2c36f206b74a346320d63b212ab98" exitCode=0 Feb 18 11:55:16 crc kubenswrapper[4922]: I0218 11:55:16.772228 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fnkcj" event={"ID":"2102ef9b-8151-4edf-8b43-7c4486203911","Type":"ContainerDied","Data":"96a538608703e5f575eecc4d7943dd23e9a2c36f206b74a346320d63b212ab98"} Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.227638 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sznrv" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.236817 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.326865 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxwsh\" (UniqueName: \"kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh\") pod \"5f835c05-4bbb-4678-9410-8523cf308f05\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.326962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle\") pod \"2102ef9b-8151-4edf-8b43-7c4486203911\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.327017 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle\") pod \"5f835c05-4bbb-4678-9410-8523cf308f05\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.327099 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data\") pod \"5f835c05-4bbb-4678-9410-8523cf308f05\" (UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.327127 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsps6\" (UniqueName: \"kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6\") pod \"2102ef9b-8151-4edf-8b43-7c4486203911\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.327983 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data\") pod \"5f835c05-4bbb-4678-9410-8523cf308f05\" 
(UID: \"5f835c05-4bbb-4678-9410-8523cf308f05\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.328015 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data\") pod \"2102ef9b-8151-4edf-8b43-7c4486203911\" (UID: \"2102ef9b-8151-4edf-8b43-7c4486203911\") " Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.332882 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6" (OuterVolumeSpecName: "kube-api-access-bsps6") pod "2102ef9b-8151-4edf-8b43-7c4486203911" (UID: "2102ef9b-8151-4edf-8b43-7c4486203911"). InnerVolumeSpecName "kube-api-access-bsps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.335137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f835c05-4bbb-4678-9410-8523cf308f05" (UID: "5f835c05-4bbb-4678-9410-8523cf308f05"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.335428 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh" (OuterVolumeSpecName: "kube-api-access-cxwsh") pod "5f835c05-4bbb-4678-9410-8523cf308f05" (UID: "5f835c05-4bbb-4678-9410-8523cf308f05"). InnerVolumeSpecName "kube-api-access-cxwsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.350488 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f835c05-4bbb-4678-9410-8523cf308f05" (UID: "5f835c05-4bbb-4678-9410-8523cf308f05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.360515 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2102ef9b-8151-4edf-8b43-7c4486203911" (UID: "2102ef9b-8151-4edf-8b43-7c4486203911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.374881 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data" (OuterVolumeSpecName: "config-data") pod "2102ef9b-8151-4edf-8b43-7c4486203911" (UID: "2102ef9b-8151-4edf-8b43-7c4486203911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.385492 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data" (OuterVolumeSpecName: "config-data") pod "5f835c05-4bbb-4678-9410-8523cf308f05" (UID: "5f835c05-4bbb-4678-9410-8523cf308f05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429536 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429578 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429591 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxwsh\" (UniqueName: \"kubernetes.io/projected/5f835c05-4bbb-4678-9410-8523cf308f05-kube-api-access-cxwsh\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429604 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2102ef9b-8151-4edf-8b43-7c4486203911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429615 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429626 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f835c05-4bbb-4678-9410-8523cf308f05-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.429639 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsps6\" (UniqueName: \"kubernetes.io/projected/2102ef9b-8151-4edf-8b43-7c4486203911-kube-api-access-bsps6\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.788958 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-sznrv" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.788956 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-sznrv" event={"ID":"5f835c05-4bbb-4678-9410-8523cf308f05","Type":"ContainerDied","Data":"64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d"} Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.789647 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fe53faccd6664eee091113f52c4e7d0ab540979e620f3862763ef2ec700f6d" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.790528 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fnkcj" event={"ID":"2102ef9b-8151-4edf-8b43-7c4486203911","Type":"ContainerDied","Data":"e7cf1c0a3ad45fa91a71f51e0bf0901154e6b84ba139717db0e095e711718562"} Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.790563 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7cf1c0a3ad45fa91a71f51e0bf0901154e6b84ba139717db0e095e711718562" Feb 18 11:55:18 crc kubenswrapper[4922]: I0218 11:55:18.790590 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fnkcj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.063515 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.063777 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="dnsmasq-dns" containerID="cri-o://b46ddfc217eb56701f6d0cade709faba90adfb7a9e5d395fb4ad4edb78eeacbf" gracePeriod=10 Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.066542 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.101321 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:19 crc kubenswrapper[4922]: E0218 11:55:19.102297 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2102ef9b-8151-4edf-8b43-7c4486203911" containerName="keystone-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.102315 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2102ef9b-8151-4edf-8b43-7c4486203911" containerName="keystone-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: E0218 11:55:19.102331 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" containerName="watcher-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.102336 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" containerName="watcher-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.102504 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2102ef9b-8151-4edf-8b43-7c4486203911" containerName="keystone-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.102517 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" containerName="watcher-db-sync" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.103532 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143293 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143390 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143428 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mgs\" (UniqueName: \"kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143503 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.143554 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.176314 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mqkkx"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.177407 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.194873 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.195070 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.195205 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.195405 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.195523 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8rn4" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.221392 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247542 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247593 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247620 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247668 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247691 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247769 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8x7\" (UniqueName: \"kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247852 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mgs\" (UniqueName: \"kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.247888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.250856 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.250907 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.251491 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.252387 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.265905 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mqkkx"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.266897 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.325283 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mgs\" (UniqueName: \"kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs\") pod \"dnsmasq-dns-5b868669f-bms2q\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.362694 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.362765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.362863 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.362996 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8x7\" (UniqueName: \"kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.363072 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.363120 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.386693 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" 
podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.391398 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.422190 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.423085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.423427 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.466919 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.467700 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.468926 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.470303 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8x7\" (UniqueName: \"kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7\") pod \"keystone-bootstrap-mqkkx\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.470978 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-82jsz" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.471186 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.492364 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.561746 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.562440 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.563523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.566725 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.619613 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.626459 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.634802 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zjb6x"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.638766 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.645033 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.660351 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zjb6x"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.660442 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.660533 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.672405 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.672440 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.672500 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bmh7l" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.672589 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.674291 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.675935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.675986 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676072 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfjr\" (UniqueName: \"kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676099 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkg5\" (UniqueName: \"kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676181 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676278 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676344 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.676503 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.681744 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.691654 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.691988 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lqkqv" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.692182 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.692279 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.692379 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.757829 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-clz29"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.758942 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.775817 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.776047 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pnzs4" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.776189 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.777843 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.777897 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.777924 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.777953 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfjr\" (UniqueName: \"kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.777975 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6lkg5\" (UniqueName: \"kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778003 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778026 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778046 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bkw\" (UniqueName: \"kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778082 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778126 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778153 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmgk\" (UniqueName: \"kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778184 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778464 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbxnl\" (UniqueName: 
\"kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778638 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778760 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778791 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778887 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778913 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778934 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.778967 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.792056 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs\") pod \"watcher-applier-0\" (UID: 
\"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.792139 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.792546 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.803264 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.810086 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.811585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.815478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.818120 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.822423 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.823462 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rvpx7"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.824578 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.838842 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.839035 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xpg9l" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.840994 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.841140 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.841958 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfjr\" (UniqueName: \"kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr\") pod \"watcher-applier-0\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " pod="openstack/watcher-applier-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.842121 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lkg5\" (UniqueName: \"kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5\") pod \"watcher-decision-engine-0\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.848993 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.854905 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.882846 4922 generic.go:334] "Generic (PLEG): container finished" podID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerID="b46ddfc217eb56701f6d0cade709faba90adfb7a9e5d395fb4ad4edb78eeacbf" exitCode=0 Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.882902 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" event={"ID":"df0f7ce3-64d4-45c3-b416-58e49b5b5bac","Type":"ContainerDied","Data":"b46ddfc217eb56701f6d0cade709faba90adfb7a9e5d395fb4ad4edb78eeacbf"} Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883315 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883352 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883408 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883429 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883450 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883471 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883486 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883508 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8ft\" (UniqueName: \"kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883536 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bkw\" (UniqueName: \"kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883589 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmgk\" (UniqueName: \"kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883619 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883643 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883677 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883707 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbxnl\" (UniqueName: \"kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883731 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883767 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883788 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwrr\" (UniqueName: \"kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883854 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883884 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883916 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883938 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " 
pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883954 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.883972 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.884003 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.884176 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.884953 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.887496 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.888638 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.890817 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.909428 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.925051 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.925532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.946336 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.948447 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-clz29"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.949109 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.949562 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.956199 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmgk\" (UniqueName: \"kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.956634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbxnl\" (UniqueName: \"kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl\") pod \"horizon-569dfc4865-ndwdj\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.961573 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mqfhx"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.962855 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.964302 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle\") pod \"neutron-db-sync-zjb6x\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.965138 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bkw\" (UniqueName: \"kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw\") pod \"watcher-api-0\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " pod="openstack/watcher-api-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.984112 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvpx7"] Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.984936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.984979 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985002 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8ft\" (UniqueName: \"kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985034 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985071 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8nh\" (UniqueName: \"kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985089 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985107 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id\") pod \"cinder-db-sync-rvpx7\" (UID: 
\"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985167 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985184 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwrr\" (UniqueName: \"kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985206 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985225 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985298 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985315 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985379 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.985810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.986050 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:19 crc kubenswrapper[4922]: I0218 11:55:19.991391 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.003245 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.003439 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pktbk" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.004039 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.019336 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.023234 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mqfhx"] Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.026265 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.044863 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.046201 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.055428 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.057116 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.078665 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.087931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725mj\" (UniqueName: \"kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.087971 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088019 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088039 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8nh\" (UniqueName: \"kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088098 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088114 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088186 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088210 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.088937 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.092679 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.115052 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.117196 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.123572 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.124571 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.124895 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.130149 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.134573 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.138211 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8ft\" (UniqueName: \"kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.151207 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle\") pod \"placement-db-sync-clz29\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " pod="openstack/placement-db-sync-clz29" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.168874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.169721 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwrr\" (UniqueName: \"kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr\") pod \"ceilometer-0\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " pod="openstack/ceilometer-0" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.192278 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.192332 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.193718 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.193823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.193887 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.193924 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.193949 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24jm\" (UniqueName: \"kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194001 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194094 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194173 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194227 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194302 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194349 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194585 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " 
pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194666 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kkt\" (UniqueName: \"kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194716 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.194773 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-725mj\" (UniqueName: \"kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.223011 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.223461 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.226107 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-725mj\" (UniqueName: \"kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj\") pod \"barbican-db-sync-mqfhx\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.226329 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8nh\" (UniqueName: \"kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh\") pod \"cinder-db-sync-rvpx7\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.310559 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.312198 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 
11:55:20.312399 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kkt\" (UniqueName: \"kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.312565 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.312734 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.312987 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.313145 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.313261 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.313352 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24jm\" (UniqueName: \"kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.313476 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.313620 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.314189 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.314230 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.311532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.315614 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.315776 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.315999 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.317164 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.320772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.331758 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.361892 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24jm\" (UniqueName: \"kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm\") pod \"horizon-97df67fc7-qxhz9\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " 
pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:20 crc kubenswrapper[4922]: I0218 11:55:20.362999 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kkt\" (UniqueName: \"kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt\") pod \"dnsmasq-dns-cf78879c9-2ktqs\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.411691 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-clz29" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.464241 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.473819 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.658270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.679296 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.688431 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:20.895919 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-st9pz" event={"ID":"855fb3ec-e473-4a99-a94f-cc96dda6d9c4","Type":"ContainerStarted","Data":"f6b3d065bfd75b2cfcd638eb81caf2052d3e13f6ce152807bf20c4d48f24622e"} Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.494112 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.562434 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.637459 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.638869 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.673324 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.741528 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.741608 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.741647 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.741666 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmpjz\" (UniqueName: \"kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.741728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.844002 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.844072 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmpjz\" (UniqueName: \"kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.844198 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.844273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.844804 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.845074 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.845428 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.846089 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.851015 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.873135 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmpjz\" (UniqueName: \"kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz\") pod \"horizon-5b45644857-ghjwx\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:21.958028 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.790223 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-st9pz" podStartSLOduration=6.351904296 podStartE2EDuration="1m15.790202329s" podCreationTimestamp="2026-02-18 11:54:07 +0000 UTC" firstStartedPulling="2026-02-18 11:54:09.401080259 +0000 UTC m=+1051.128784339" lastFinishedPulling="2026-02-18 11:55:18.839378292 +0000 UTC m=+1120.567082372" observedRunningTime="2026-02-18 11:55:21.953404872 +0000 UTC m=+1123.681108952" watchObservedRunningTime="2026-02-18 11:55:22.790202329 +0000 UTC m=+1124.517906399" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.806633 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.902917 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.938466 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" event={"ID":"df0f7ce3-64d4-45c3-b416-58e49b5b5bac","Type":"ContainerDied","Data":"d6fd96a63f0eb8def4408247acfb363976fe91401640a5a1bfb2074f5c37e39c"} Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.938537 4922 scope.go:117] "RemoveContainer" containerID="b46ddfc217eb56701f6d0cade709faba90adfb7a9e5d395fb4ad4edb78eeacbf" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.938546 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zpx62" Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973093 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973242 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn24f\" (UniqueName: \"kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973283 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973351 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973450 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.973537 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb\") pod \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\" (UID: \"df0f7ce3-64d4-45c3-b416-58e49b5b5bac\") " Feb 18 11:55:22 crc kubenswrapper[4922]: I0218 11:55:22.991772 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f" (OuterVolumeSpecName: "kube-api-access-xn24f") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "kube-api-access-xn24f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.033683 4922 scope.go:117] "RemoveContainer" containerID="de3eb0b81c2ecf51916540dbcf6e765b8396d655b971b5b5ee803c68d62a7d58" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.080075 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn24f\" (UniqueName: \"kubernetes.io/projected/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-kube-api-access-xn24f\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: W0218 11:55:23.090673 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod080a4dbf_a721_4b07_8c48_1ed03637a871.slice/crio-953a10f63b24926b9e9c17b705d09c9065c66847ee05a8c88823e22fb95fe0b7 WatchSource:0}: Error finding container 953a10f63b24926b9e9c17b705d09c9065c66847ee05a8c88823e22fb95fe0b7: Status 404 returned error can't find the container with id 953a10f63b24926b9e9c17b705d09c9065c66847ee05a8c88823e22fb95fe0b7 Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.103333 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.114880 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.120749 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.128196 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.128404 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config" (OuterVolumeSpecName: "config") pod "df0f7ce3-64d4-45c3-b416-58e49b5b5bac" (UID: "df0f7ce3-64d4-45c3-b416-58e49b5b5bac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.140602 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.182022 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.182071 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.182088 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.182098 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.182106 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f7ce3-64d4-45c3-b416-58e49b5b5bac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.202723 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:55:23 crc kubenswrapper[4922]: W0218 11:55:23.207553 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e9fc515_5e15_41fc_8e76_b9f3af099a0f.slice/crio-089198a21158f6f6b56f6630ad31cc5105f4b0001f6d2937e2ed72533d2aaec8 WatchSource:0}: Error finding container 089198a21158f6f6b56f6630ad31cc5105f4b0001f6d2937e2ed72533d2aaec8: Status 404 returned error can't find the container with id 089198a21158f6f6b56f6630ad31cc5105f4b0001f6d2937e2ed72533d2aaec8 Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.218657 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.229278 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mqkkx"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.313187 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.335863 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zpx62"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.520475 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-clz29"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.532311 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zjb6x"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.558735 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:55:23 crc kubenswrapper[4922]: W0218 11:55:23.585160 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31aad152_dcb7_472f_a0f8_d90ae972442b.slice/crio-9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922 WatchSource:0}: Error finding container 9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922: Status 404 returned error can't find the container with id 9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922 Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.594808 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mqfhx"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.604460 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rvpx7"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.613749 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.622977 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.631872 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.727052 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.737875 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:55:23 crc kubenswrapper[4922]: W0218 11:55:23.760204 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c614b6a_8d46_4a07_89f6_1a7cc64dfdad.slice/crio-c1680a690b411a1b27ee20c720d7966628a4b267d54560c222bee5a47c521f74 WatchSource:0}: Error finding container c1680a690b411a1b27ee20c720d7966628a4b267d54560c222bee5a47c521f74: Status 404 returned error can't find the container with id c1680a690b411a1b27ee20c720d7966628a4b267d54560c222bee5a47c521f74 Feb 18 11:55:23 crc kubenswrapper[4922]: W0218 11:55:23.762374 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24bbb94b_821e_4c8c_ae27_356f296903bf.slice/crio-d52ba93a2e395a5fe3e842c514b38b9be3486416e78d8b85381f805174f39d21 WatchSource:0}: Error finding container d52ba93a2e395a5fe3e842c514b38b9be3486416e78d8b85381f805174f39d21: Status 404 returned error can't find the container with id d52ba93a2e395a5fe3e842c514b38b9be3486416e78d8b85381f805174f39d21 Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.954170 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b45644857-ghjwx" event={"ID":"92ff9b70-4f7e-43b8-b270-3470a18fcbda","Type":"ContainerStarted","Data":"bea8a00a01eb30a986c3d9e163f1f6f16bd92caadc91d9f743ebfba2b622cd5d"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.956520 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" event={"ID":"24bbb94b-821e-4c8c-ae27-356f296903bf","Type":"ContainerStarted","Data":"d52ba93a2e395a5fe3e842c514b38b9be3486416e78d8b85381f805174f39d21"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.960649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9fc515-5e15-41fc-8e76-b9f3af099a0f","Type":"ContainerStarted","Data":"089198a21158f6f6b56f6630ad31cc5105f4b0001f6d2937e2ed72533d2aaec8"} Feb 18 11:55:23 crc 
kubenswrapper[4922]: I0218 11:55:23.967727 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mqfhx" event={"ID":"31aad152-dcb7-472f-a0f8-d90ae972442b","Type":"ContainerStarted","Data":"9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.979569 4922 generic.go:334] "Generic (PLEG): container finished" podID="080a4dbf-a721-4b07-8c48-1ed03637a871" containerID="62ed2a0efd29b6e0c16d2156abed1ba070dd313cfdfdd01c3fa25fadbcf98e85" exitCode=0 Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.979665 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-bms2q" event={"ID":"080a4dbf-a721-4b07-8c48-1ed03637a871","Type":"ContainerDied","Data":"62ed2a0efd29b6e0c16d2156abed1ba070dd313cfdfdd01c3fa25fadbcf98e85"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.979693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-bms2q" event={"ID":"080a4dbf-a721-4b07-8c48-1ed03637a871","Type":"ContainerStarted","Data":"953a10f63b24926b9e9c17b705d09c9065c66847ee05a8c88823e22fb95fe0b7"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.983390 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad","Type":"ContainerStarted","Data":"c1680a690b411a1b27ee20c720d7966628a4b267d54560c222bee5a47c521f74"} Feb 18 11:55:23 crc kubenswrapper[4922]: I0218 11:55:23.987160 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerStarted","Data":"d077394e189534489b8a5cebe609760985017f8cfefceb856dec5e4e90cc20e1"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.011556 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerStarted","Data":"45913156357b5b17fd4ecdb6fa34c39561bf28bcfc8935781691cdc1a3026c9b"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.011639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerStarted","Data":"5443076c45ea0609742b231b2d38cf6b7ce01d78bf985c32d5e9cd70f2e17de2"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.015968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-clz29" event={"ID":"71318c6d-61ee-4fb4-8682-7cf3fc0ae044","Type":"ContainerStarted","Data":"c85150d5f0f00107589b346911782e9cacbfd0c2f75cf23b1582f2bb391c607c"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.024661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjb6x" event={"ID":"4bcd3608-244b-44f0-be1f-5d953cd35964","Type":"ContainerStarted","Data":"4f713717bbd69a1844002c6344555c40f26be59a2b8b6c3086945e62b2e3a5ca"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.050340 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mqkkx" event={"ID":"7598fc1c-8735-4c0e-a095-f13117d3037e","Type":"ContainerStarted","Data":"ffcb369954fde5f2470b893096827224a7452c99e4a60328dd0f303414ba87d8"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.050444 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mqkkx" 
event={"ID":"7598fc1c-8735-4c0e-a095-f13117d3037e","Type":"ContainerStarted","Data":"5cd177217aaeb4ee4e85b6aadff7f1be10663fa2d35eb9127c500266083c678f"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.070741 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvpx7" event={"ID":"d7852f85-b8c5-458e-901c-3659c5ed2713","Type":"ContainerStarted","Data":"b5010985da36e7523bd0bc3fdfdcc8c443c58a1cc63438b6b3af9b7f64ca52d5"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.078716 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97df67fc7-qxhz9" event={"ID":"20b58cdc-a36d-4a63-b86d-474dac5d4566","Type":"ContainerStarted","Data":"45548fdfaf9ace756d38cd2bba939d9053aaab1d8d9828430bf80dd188bf6b85"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.079279 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mqkkx" podStartSLOduration=5.079256659 podStartE2EDuration="5.079256659s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:55:24.075433292 +0000 UTC m=+1125.803137372" watchObservedRunningTime="2026-02-18 11:55:24.079256659 +0000 UTC m=+1125.806960749" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.081740 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569dfc4865-ndwdj" event={"ID":"8d1daa08-43b5-47db-bd94-3efb0eb4dce2","Type":"ContainerStarted","Data":"de849e1d2ee9efed619f5a1fae0183071acffda1776ddbf93a3e733f6554e51b"} Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.466574 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523215 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc\") pod \"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523310 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0\") pod \"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523374 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb\") pod \"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523460 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb\") pod \"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523536 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mgs\" (UniqueName: \"kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs\") pod 
\"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.523588 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config\") pod \"080a4dbf-a721-4b07-8c48-1ed03637a871\" (UID: \"080a4dbf-a721-4b07-8c48-1ed03637a871\") " Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.550538 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.550835 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs" (OuterVolumeSpecName: "kube-api-access-85mgs") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "kube-api-access-85mgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.555852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.564336 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config" (OuterVolumeSpecName: "config") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.588965 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.589505 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "080a4dbf-a721-4b07-8c48-1ed03637a871" (UID: "080a4dbf-a721-4b07-8c48-1ed03637a871"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629852 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mgs\" (UniqueName: \"kubernetes.io/projected/080a4dbf-a721-4b07-8c48-1ed03637a871-kube-api-access-85mgs\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629890 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629901 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629914 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629925 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.629935 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080a4dbf-a721-4b07-8c48-1ed03637a871-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:55:24 crc kubenswrapper[4922]: I0218 11:55:24.990267 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" path="/var/lib/kubelet/pods/df0f7ce3-64d4-45c3-b416-58e49b5b5bac/volumes" Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.108429 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerStarted","Data":"580ec3bf82a2cc3d74061b0ae6a5ff2d1e8371f633928e9b01fc3ab3ece19784"} Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.108663 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api-log" containerID="cri-o://45913156357b5b17fd4ecdb6fa34c39561bf28bcfc8935781691cdc1a3026c9b" gracePeriod=30 Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.109311 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.109396 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" containerID="cri-o://580ec3bf82a2cc3d74061b0ae6a5ff2d1e8371f633928e9b01fc3ab3ece19784" gracePeriod=30 Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.134749 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-bms2q" event={"ID":"080a4dbf-a721-4b07-8c48-1ed03637a871","Type":"ContainerDied","Data":"953a10f63b24926b9e9c17b705d09c9065c66847ee05a8c88823e22fb95fe0b7"} Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.134811 4922 scope.go:117] "RemoveContainer" containerID="62ed2a0efd29b6e0c16d2156abed1ba070dd313cfdfdd01c3fa25fadbcf98e85" 
Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.134948 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-bms2q" Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.145059 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": EOF" Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.149158 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=6.149135978 podStartE2EDuration="6.149135978s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:55:25.134007356 +0000 UTC m=+1126.861711466" watchObservedRunningTime="2026-02-18 11:55:25.149135978 +0000 UTC m=+1126.876840058" Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.165191 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjb6x" event={"ID":"4bcd3608-244b-44f0-be1f-5d953cd35964","Type":"ContainerStarted","Data":"b2705f7646a829371102cfea131123135c7490beec56768c960d999d9c5fa2de"} Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.216832 4922 generic.go:334] "Generic (PLEG): container finished" podID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerID="7f1abe52f752c943d157e484e47d8d790a98ef3e4904432fd3c21618fac5c1e6" exitCode=0 Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.217387 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" event={"ID":"24bbb94b-821e-4c8c-ae27-356f296903bf","Type":"ContainerDied","Data":"7f1abe52f752c943d157e484e47d8d790a98ef3e4904432fd3c21618fac5c1e6"} Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.229064 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.254082 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-bms2q"] Feb 18 11:55:25 crc kubenswrapper[4922]: I0218 11:55:25.265064 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zjb6x" podStartSLOduration=6.265046129 podStartE2EDuration="6.265046129s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:55:25.211520506 +0000 UTC m=+1126.939224576" watchObservedRunningTime="2026-02-18 11:55:25.265046129 +0000 UTC m=+1126.992750209" Feb 18 11:55:26 crc kubenswrapper[4922]: I0218 11:55:26.232837 4922 generic.go:334] "Generic (PLEG): container finished" podID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerID="45913156357b5b17fd4ecdb6fa34c39561bf28bcfc8935781691cdc1a3026c9b" exitCode=143 Feb 18 11:55:26 crc kubenswrapper[4922]: I0218 11:55:26.232921 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerDied","Data":"45913156357b5b17fd4ecdb6fa34c39561bf28bcfc8935781691cdc1a3026c9b"} Feb 18 11:55:26 crc kubenswrapper[4922]: I0218 11:55:26.984736 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080a4dbf-a721-4b07-8c48-1ed03637a871" 
path="/var/lib/kubelet/pods/080a4dbf-a721-4b07-8c48-1ed03637a871/volumes" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.008743 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.048329 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:55:28 crc kubenswrapper[4922]: E0218 11:55:28.048745 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080a4dbf-a721-4b07-8c48-1ed03637a871" containerName="init" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.048761 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="080a4dbf-a721-4b07-8c48-1ed03637a871" containerName="init" Feb 18 11:55:28 crc kubenswrapper[4922]: E0218 11:55:28.048779 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="dnsmasq-dns" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.048787 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="dnsmasq-dns" Feb 18 11:55:28 crc kubenswrapper[4922]: E0218 11:55:28.048809 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="init" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.048824 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="init" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.048986 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f7ce3-64d4-45c3-b416-58e49b5b5bac" containerName="dnsmasq-dns" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.049012 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="080a4dbf-a721-4b07-8c48-1ed03637a871" containerName="init" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.049938 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.054881 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.080002 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.126954 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.183177 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bbf5454f6-d5958"] Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.187196 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.196649 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bbf5454f6-d5958"] Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.214863 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215017 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215051 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215096 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215129 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215452 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.215501 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkcr\" (UniqueName: \"kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.318311 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-config-data\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.318394 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kvkcr\" (UniqueName: \"kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.318573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bc8759d-86ff-415d-936a-064ef742f0d9-logs\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.318676 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320217 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320267 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320303 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-secret-key\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320324 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-tls-certs\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320416 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-combined-ca-bundle\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.320646 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321161 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321469 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-scripts\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321525 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321554 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6qh\" (UniqueName: \"kubernetes.io/projected/3bc8759d-86ff-415d-936a-064ef742f0d9-kube-api-access-9r6qh\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321657 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.321702 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.327937 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.328038 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.328555 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " 
pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.343608 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvkcr\" (UniqueName: \"kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr\") pod \"horizon-9d79df67b-mg9kq\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.387810 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424097 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-secret-key\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424155 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-tls-certs\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424208 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-combined-ca-bundle\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424307 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-scripts\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424347 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6qh\" (UniqueName: \"kubernetes.io/projected/3bc8759d-86ff-415d-936a-064ef742f0d9-kube-api-access-9r6qh\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424407 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-config-data\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424437 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bc8759d-86ff-415d-936a-064ef742f0d9-logs\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.424892 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bc8759d-86ff-415d-936a-064ef742f0d9-logs\") pod \"horizon-7bbf5454f6-d5958\" (UID: 
\"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.425457 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-scripts\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.433289 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc8759d-86ff-415d-936a-064ef742f0d9-config-data\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.436336 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-tls-certs\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.437182 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-horizon-secret-key\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.437238 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc8759d-86ff-415d-936a-064ef742f0d9-combined-ca-bundle\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.446223 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6qh\" (UniqueName: \"kubernetes.io/projected/3bc8759d-86ff-415d-936a-064ef742f0d9-kube-api-access-9r6qh\") pod \"horizon-7bbf5454f6-d5958\" (UID: \"3bc8759d-86ff-415d-936a-064ef742f0d9\") " pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:28 crc kubenswrapper[4922]: I0218 11:55:28.522895 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:55:30 crc kubenswrapper[4922]: I0218 11:55:30.004772 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:55:31 crc kubenswrapper[4922]: I0218 11:55:31.542959 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": read tcp 10.217.0.2:55390->10.217.0.149:9322: read: connection reset by peer" Feb 18 11:55:32 crc kubenswrapper[4922]: I0218 11:55:32.303941 4922 generic.go:334] "Generic (PLEG): container finished" podID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerID="580ec3bf82a2cc3d74061b0ae6a5ff2d1e8371f633928e9b01fc3ab3ece19784" exitCode=0 Feb 18 11:55:32 crc kubenswrapper[4922]: I0218 11:55:32.304023 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerDied","Data":"580ec3bf82a2cc3d74061b0ae6a5ff2d1e8371f633928e9b01fc3ab3ece19784"} Feb 18 11:55:34 crc kubenswrapper[4922]: I0218 11:55:34.322332 4922 generic.go:334] "Generic (PLEG): container finished" podID="7598fc1c-8735-4c0e-a095-f13117d3037e" containerID="ffcb369954fde5f2470b893096827224a7452c99e4a60328dd0f303414ba87d8" exitCode=0 Feb 18 11:55:34 crc kubenswrapper[4922]: I0218 11:55:34.322412 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mqkkx" event={"ID":"7598fc1c-8735-4c0e-a095-f13117d3037e","Type":"ContainerDied","Data":"ffcb369954fde5f2470b893096827224a7452c99e4a60328dd0f303414ba87d8"} Feb 18 11:55:35 crc kubenswrapper[4922]: I0218 11:55:35.006183 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Feb 18 11:55:40 crc kubenswrapper[4922]: I0218 11:55:40.005539 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Feb 18 11:55:41 crc kubenswrapper[4922]: E0218 11:55:41.387542 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 11:55:41 crc kubenswrapper[4922]: E0218 11:55:41.388733 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h4h576hd5h66ch9bh58h8fh5ffh64fh545h5b6h5dbh58bh64h65bh67fh648h645h579hcdh68dh5cbh558h74h7fh5f5h5b8h5bfh555h5d5h689q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p24jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-97df67fc7-qxhz9_openstack(20b58cdc-a36d-4a63-b86d-474dac5d4566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:55:41 crc kubenswrapper[4922]: E0218 11:55:41.391319 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-97df67fc7-qxhz9" podUID="20b58cdc-a36d-4a63-b86d-474dac5d4566" Feb 18 11:55:45 crc kubenswrapper[4922]: I0218 11:55:45.005975 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Feb 18 11:55:50 crc kubenswrapper[4922]: I0218 11:55:50.005715 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": dial tcp 10.217.0.149:9322: connect: connection refused" Feb 18 11:56:00 crc kubenswrapper[4922]: I0218 11:56:00.005883 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:05 crc 
kubenswrapper[4922]: I0218 11:56:05.007575 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.545355 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.545889 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-725mj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-mqfhx_openstack(31aad152-dcb7-472f-a0f8-d90ae972442b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.547039 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-mqfhx" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.600760 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.608815 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.609252 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mqkkx" event={"ID":"7598fc1c-8735-4c0e-a095-f13117d3037e","Type":"ContainerDied","Data":"5cd177217aaeb4ee4e85b6aadff7f1be10663fa2d35eb9127c500266083c678f"} Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.609288 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd177217aaeb4ee4e85b6aadff7f1be10663fa2d35eb9127c500266083c678f" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.611415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-97df67fc7-qxhz9" event={"ID":"20b58cdc-a36d-4a63-b86d-474dac5d4566","Type":"ContainerDied","Data":"45548fdfaf9ace756d38cd2bba939d9053aaab1d8d9828430bf80dd188bf6b85"} Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.611488 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-97df67fc7-qxhz9" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.612387 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-mqfhx" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.732918 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.733094 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n657h5b4h564hcfh5d8hch8bh699h86h5bch678h5b8hd7h675h84h658h76hd9h56dh675h58h5dfh68dh5c6hdhb5h5f7h54ch67bh654h57ch67dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmpjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b45644857-ghjwx_openstack(92ff9b70-4f7e-43b8-b270-3470a18fcbda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:05 crc kubenswrapper[4922]: E0218 11:56:05.736108 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b45644857-ghjwx" podUID="92ff9b70-4f7e-43b8-b270-3470a18fcbda" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800007 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800080 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data\") pod \"20b58cdc-a36d-4a63-b86d-474dac5d4566\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800166 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 
11:56:05.800208 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts\") pod \"20b58cdc-a36d-4a63-b86d-474dac5d4566\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800247 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800274 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800325 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24jm\" (UniqueName: \"kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm\") pod \"20b58cdc-a36d-4a63-b86d-474dac5d4566\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800417 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key\") pod \"20b58cdc-a36d-4a63-b86d-474dac5d4566\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj8x7\" (UniqueName: \"kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs\") pod \"20b58cdc-a36d-4a63-b86d-474dac5d4566\" (UID: \"20b58cdc-a36d-4a63-b86d-474dac5d4566\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.800562 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys\") pod \"7598fc1c-8735-4c0e-a095-f13117d3037e\" (UID: \"7598fc1c-8735-4c0e-a095-f13117d3037e\") " Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.802822 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data" (OuterVolumeSpecName: "config-data") pod "20b58cdc-a36d-4a63-b86d-474dac5d4566" (UID: "20b58cdc-a36d-4a63-b86d-474dac5d4566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.803278 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs" (OuterVolumeSpecName: "logs") pod "20b58cdc-a36d-4a63-b86d-474dac5d4566" (UID: "20b58cdc-a36d-4a63-b86d-474dac5d4566"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.812860 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts" (OuterVolumeSpecName: "scripts") pod "20b58cdc-a36d-4a63-b86d-474dac5d4566" (UID: "20b58cdc-a36d-4a63-b86d-474dac5d4566"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.824871 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts" (OuterVolumeSpecName: "scripts") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.827135 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.827316 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm" (OuterVolumeSpecName: "kube-api-access-p24jm") pod "20b58cdc-a36d-4a63-b86d-474dac5d4566" (UID: "20b58cdc-a36d-4a63-b86d-474dac5d4566"). InnerVolumeSpecName "kube-api-access-p24jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.829352 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7" (OuterVolumeSpecName: "kube-api-access-kj8x7") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "kube-api-access-kj8x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.829724 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "20b58cdc-a36d-4a63-b86d-474dac5d4566" (UID: "20b58cdc-a36d-4a63-b86d-474dac5d4566"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.830121 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.855756 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data" (OuterVolumeSpecName: "config-data") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.856627 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7598fc1c-8735-4c0e-a095-f13117d3037e" (UID: "7598fc1c-8735-4c0e-a095-f13117d3037e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.902509 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903039 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903060 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903070 4922 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903081 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20b58cdc-a36d-4a63-b86d-474dac5d4566-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903089 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903098 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7598fc1c-8735-4c0e-a095-f13117d3037e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903107 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24jm\" (UniqueName: \"kubernetes.io/projected/20b58cdc-a36d-4a63-b86d-474dac5d4566-kube-api-access-p24jm\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903116 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20b58cdc-a36d-4a63-b86d-474dac5d4566-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903124 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj8x7\" (UniqueName: \"kubernetes.io/projected/7598fc1c-8735-4c0e-a095-f13117d3037e-kube-api-access-kj8x7\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.903132 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b58cdc-a36d-4a63-b86d-474dac5d4566-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:05 crc kubenswrapper[4922]: I0218 11:56:05.995295 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.005907 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-97df67fc7-qxhz9"] Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 11:56:06.045155 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 11:56:06.045379 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64bh648hb7h56chd8h679hcfh54fh658h688h76hbch545h584h5ffh5d5h59fh648h6h54bh5cfhbbh568hdch55ch645h5dbh76h5b7h598h78h5f4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbxnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-569dfc4865-ndwdj_openstack(8d1daa08-43b5-47db-bd94-3efb0eb4dce2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 11:56:06.048027 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-569dfc4865-ndwdj" podUID="8d1daa08-43b5-47db-bd94-3efb0eb4dce2" Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 11:56:06.590754 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 
11:56:06.590919 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n556hfbh5c8h658h586hdfh5c5h6dh64dh9chfchd6h8ch548h579h8chb5h95h74h64ch65bh5c8hch65fh66ch644h589h5b8h567h579h5c7h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmwrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a3873c9e-308f-46ea-ac8f-4ee78ca92235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.619032 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mqkkx" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.792647 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mqkkx"] Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.801507 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mqkkx"] Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.902997 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8pvj2"] Feb 18 11:56:06 crc kubenswrapper[4922]: E0218 11:56:06.903497 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7598fc1c-8735-4c0e-a095-f13117d3037e" containerName="keystone-bootstrap" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.903513 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7598fc1c-8735-4c0e-a095-f13117d3037e" containerName="keystone-bootstrap" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.903702 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7598fc1c-8735-4c0e-a095-f13117d3037e" containerName="keystone-bootstrap" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.904287 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.908972 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.909044 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8rn4" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.909121 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.909621 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.909865 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.927585 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8pvj2"] Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.933485 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.933593 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcv5t\" (UniqueName: \"kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.933626 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: 
I0218 11:56:06.933674 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.933731 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.933808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.988652 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b58cdc-a36d-4a63-b86d-474dac5d4566" path="/var/lib/kubelet/pods/20b58cdc-a36d-4a63-b86d-474dac5d4566/volumes" Feb 18 11:56:06 crc kubenswrapper[4922]: I0218 11:56:06.989033 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7598fc1c-8735-4c0e-a095-f13117d3037e" path="/var/lib/kubelet/pods/7598fc1c-8735-4c0e-a095-f13117d3037e/volumes" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.038150 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.038299 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.038343 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.038457 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.038698 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcv5t\" (UniqueName: \"kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc 
kubenswrapper[4922]: I0218 11:56:07.038737 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.047477 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.047782 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.048807 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.051142 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.052843 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.066688 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcv5t\" (UniqueName: \"kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t\") pod \"keystone-bootstrap-8pvj2\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:07 crc kubenswrapper[4922]: I0218 11:56:07.236955 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.352905 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.353666 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4h8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rvpx7_openstack(d7852f85-b8c5-458e-901c-3659c5ed2713): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.354842 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rvpx7" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.390189 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.453340 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.471914 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca\") pod \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.471993 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs\") pod \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472025 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bkw\" (UniqueName: \"kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw\") pod \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472043 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data\") pod \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472060 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key\") pod \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472165 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle\") pod \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472233 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts\") pod \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472261 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs\") pod \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\" (UID: \"c7e64f4b-f6d6-43b3-aeab-47a0da094a84\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472287 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data\") pod \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.472378 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lbxnl\" (UniqueName: \"kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl\") pod \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\" (UID: \"8d1daa08-43b5-47db-bd94-3efb0eb4dce2\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.478107 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl" (OuterVolumeSpecName: "kube-api-access-lbxnl") pod "8d1daa08-43b5-47db-bd94-3efb0eb4dce2" (UID: "8d1daa08-43b5-47db-bd94-3efb0eb4dce2"). InnerVolumeSpecName "kube-api-access-lbxnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.478675 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs" (OuterVolumeSpecName: "logs") pod "8d1daa08-43b5-47db-bd94-3efb0eb4dce2" (UID: "8d1daa08-43b5-47db-bd94-3efb0eb4dce2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.478758 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs" (OuterVolumeSpecName: "logs") pod "c7e64f4b-f6d6-43b3-aeab-47a0da094a84" (UID: "c7e64f4b-f6d6-43b3-aeab-47a0da094a84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.479230 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts" (OuterVolumeSpecName: "scripts") pod "8d1daa08-43b5-47db-bd94-3efb0eb4dce2" (UID: "8d1daa08-43b5-47db-bd94-3efb0eb4dce2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.480149 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data" (OuterVolumeSpecName: "config-data") pod "8d1daa08-43b5-47db-bd94-3efb0eb4dce2" (UID: "8d1daa08-43b5-47db-bd94-3efb0eb4dce2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.481475 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d1daa08-43b5-47db-bd94-3efb0eb4dce2" (UID: "8d1daa08-43b5-47db-bd94-3efb0eb4dce2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.481969 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw" (OuterVolumeSpecName: "kube-api-access-k6bkw") pod "c7e64f4b-f6d6-43b3-aeab-47a0da094a84" (UID: "c7e64f4b-f6d6-43b3-aeab-47a0da094a84"). InnerVolumeSpecName "kube-api-access-k6bkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.486039 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.487858 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8pvj2"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.530109 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:56:08 crc kubenswrapper[4922]: W0218 11:56:08.540567 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3b4cee1_5234_4b6c_93fa_3cb5687ecba9.slice/crio-daa7d2ea30da11dcfd9f4970f896f52151306629ac261d217fb3c7d7a4b261fd WatchSource:0}: Error finding container daa7d2ea30da11dcfd9f4970f896f52151306629ac261d217fb3c7d7a4b261fd: Status 404 returned error can't find the container with id daa7d2ea30da11dcfd9f4970f896f52151306629ac261d217fb3c7d7a4b261fd Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.569244 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bbf5454f6-d5958"] Feb 18 11:56:08 crc kubenswrapper[4922]: W0218 11:56:08.570314 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bc8759d_86ff_415d_936a_064ef742f0d9.slice/crio-bcfc19d1c521778a93513ec653e31694999fc6019dbd35eb5f3614b0b07075a0 WatchSource:0}: Error finding container bcfc19d1c521778a93513ec653e31694999fc6019dbd35eb5f3614b0b07075a0: Status 404 returned error can't find the container with id bcfc19d1c521778a93513ec653e31694999fc6019dbd35eb5f3614b0b07075a0 Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573204 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts\") pod \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573257 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data\") pod \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573274 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key\") pod \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573323 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs\") pod \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573383 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmpjz\" (UniqueName: \"kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz\") pod \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\" (UID: \"92ff9b70-4f7e-43b8-b270-3470a18fcbda\") " Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573706 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573720 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bkw\" (UniqueName: \"kubernetes.io/projected/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-kube-api-access-k6bkw\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573730 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573739 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573746 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573755 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.573762 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbxnl\" (UniqueName: \"kubernetes.io/projected/8d1daa08-43b5-47db-bd94-3efb0eb4dce2-kube-api-access-lbxnl\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.574576 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs" (OuterVolumeSpecName: "logs") pod "92ff9b70-4f7e-43b8-b270-3470a18fcbda" (UID: "92ff9b70-4f7e-43b8-b270-3470a18fcbda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.574640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts" (OuterVolumeSpecName: "scripts") pod "92ff9b70-4f7e-43b8-b270-3470a18fcbda" (UID: "92ff9b70-4f7e-43b8-b270-3470a18fcbda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.575037 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data" (OuterVolumeSpecName: "config-data") pod "92ff9b70-4f7e-43b8-b270-3470a18fcbda" (UID: "92ff9b70-4f7e-43b8-b270-3470a18fcbda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.584205 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz" (OuterVolumeSpecName: "kube-api-access-gmpjz") pod "92ff9b70-4f7e-43b8-b270-3470a18fcbda" (UID: "92ff9b70-4f7e-43b8-b270-3470a18fcbda"). InnerVolumeSpecName "kube-api-access-gmpjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.584786 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "92ff9b70-4f7e-43b8-b270-3470a18fcbda" (UID: "92ff9b70-4f7e-43b8-b270-3470a18fcbda"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.616670 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c7e64f4b-f6d6-43b3-aeab-47a0da094a84" (UID: "c7e64f4b-f6d6-43b3-aeab-47a0da094a84"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.633893 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e64f4b-f6d6-43b3-aeab-47a0da094a84" (UID: "c7e64f4b-f6d6-43b3-aeab-47a0da094a84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.635518 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerStarted","Data":"daa7d2ea30da11dcfd9f4970f896f52151306629ac261d217fb3c7d7a4b261fd"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.636662 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bbf5454f6-d5958" event={"ID":"3bc8759d-86ff-415d-936a-064ef742f0d9","Type":"ContainerStarted","Data":"bcfc19d1c521778a93513ec653e31694999fc6019dbd35eb5f3614b0b07075a0"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.637525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-569dfc4865-ndwdj" event={"ID":"8d1daa08-43b5-47db-bd94-3efb0eb4dce2","Type":"ContainerDied","Data":"de849e1d2ee9efed619f5a1fae0183071acffda1776ddbf93a3e733f6554e51b"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.637554 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-569dfc4865-ndwdj" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.638320 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b45644857-ghjwx" event={"ID":"92ff9b70-4f7e-43b8-b270-3470a18fcbda","Type":"ContainerDied","Data":"bea8a00a01eb30a986c3d9e163f1f6f16bd92caadc91d9f743ebfba2b622cd5d"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.638424 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b45644857-ghjwx" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.646814 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pvj2" event={"ID":"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0","Type":"ContainerStarted","Data":"00caa03b7a061a2933729fdd382be3e5af6da2b0ea0a8fe37af7145a15ee06a2"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.647126 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data" (OuterVolumeSpecName: "config-data") pod "c7e64f4b-f6d6-43b3-aeab-47a0da094a84" (UID: "c7e64f4b-f6d6-43b3-aeab-47a0da094a84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.650312 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.650645 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"c7e64f4b-f6d6-43b3-aeab-47a0da094a84","Type":"ContainerDied","Data":"5443076c45ea0609742b231b2d38cf6b7ce01d78bf985c32d5e9cd70f2e17de2"} Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.650686 4922 scope.go:117] "RemoveContainer" containerID="580ec3bf82a2cc3d74061b0ae6a5ff2d1e8371f633928e9b01fc3ab3ece19784" Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.652642 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rvpx7" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.675631 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92ff9b70-4f7e-43b8-b270-3470a18fcbda-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.675713 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmpjz\" (UniqueName: \"kubernetes.io/projected/92ff9b70-4f7e-43b8-b270-3470a18fcbda-kube-api-access-gmpjz\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.675959 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.676001 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.676017 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.676028 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92ff9b70-4f7e-43b8-b270-3470a18fcbda-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.676039 4922 reconciler_common.go:293] "Volume detached 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/92ff9b70-4f7e-43b8-b270-3470a18fcbda-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.676050 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e64f4b-f6d6-43b3-aeab-47a0da094a84-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.697300 4922 scope.go:117] "RemoveContainer" containerID="45913156357b5b17fd4ecdb6fa34c39561bf28bcfc8935781691cdc1a3026c9b" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.710081 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.721451 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b45644857-ghjwx"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.761028 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.776680 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-569dfc4865-ndwdj"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.782754 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.792639 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.799718 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.800132 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.800150 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" Feb 18 11:56:08 crc kubenswrapper[4922]: E0218 11:56:08.800178 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api-log" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.800185 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api-log" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.800353 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api-log" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.800434 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.801401 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.804165 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.810164 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.879105 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.879178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.879246 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.879318 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.879476 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfnt\" (UniqueName: \"kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.997277 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhfnt\" (UniqueName: \"kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.997390 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.997551 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.997613 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:08 crc kubenswrapper[4922]: I0218 11:56:08.997693 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.000291 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.003304 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.003624 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.004815 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.015937 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1daa08-43b5-47db-bd94-3efb0eb4dce2" path="/var/lib/kubelet/pods/8d1daa08-43b5-47db-bd94-3efb0eb4dce2/volumes" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.016681 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ff9b70-4f7e-43b8-b270-3470a18fcbda" path="/var/lib/kubelet/pods/92ff9b70-4f7e-43b8-b270-3470a18fcbda/volumes" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.017866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhfnt\" (UniqueName: \"kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt\") pod \"watcher-api-0\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.026763 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" path="/var/lib/kubelet/pods/c7e64f4b-f6d6-43b3-aeab-47a0da094a84/volumes" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.126544 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.574602 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.662963 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" event={"ID":"24bbb94b-821e-4c8c-ae27-356f296903bf","Type":"ContainerStarted","Data":"4fbf873b475eb4db471744da3b986195e954cf285ed89e854f16fe22b1d17ca4"} Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.663303 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.667078 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9fc515-5e15-41fc-8e76-b9f3af099a0f","Type":"ContainerStarted","Data":"538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d"} Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.669706 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pvj2" event={"ID":"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0","Type":"ContainerStarted","Data":"a94ac8d35c2954541cdb5fa078a3d4a480cc9002ceb6fe5897faee87fe9e9f1f"} Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.673578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad","Type":"ContainerStarted","Data":"17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003"} Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.677723 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-clz29" event={"ID":"71318c6d-61ee-4fb4-8682-7cf3fc0ae044","Type":"ContainerStarted","Data":"5597fcea1ce088dbc75d5c2c33874c67f09dcb474426ad6372bb91fff5a29ef3"} Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.700204 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" podStartSLOduration=50.70018397 podStartE2EDuration="50.70018397s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:09.693621644 +0000 UTC m=+1171.421325724" watchObservedRunningTime="2026-02-18 11:56:09.70018397 +0000 UTC m=+1171.427888050" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.733826 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=9.042885889 podStartE2EDuration="50.73380616s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="2026-02-18 11:55:23.767857836 +0000 UTC m=+1125.495561906" lastFinishedPulling="2026-02-18 11:56:05.458778097 +0000 UTC m=+1167.186482177" observedRunningTime="2026-02-18 11:56:09.711511267 +0000 UTC m=+1171.439215347" watchObservedRunningTime="2026-02-18 11:56:09.73380616 +0000 UTC m=+1171.461510240" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.736754 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-clz29" podStartSLOduration=6.241876883 podStartE2EDuration="50.736743415s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="2026-02-18 11:55:23.562051022 +0000 UTC m=+1125.289755102" lastFinishedPulling="2026-02-18 11:56:08.056917554 +0000 UTC 
m=+1169.784621634" observedRunningTime="2026-02-18 11:56:09.730054295 +0000 UTC m=+1171.457758365" watchObservedRunningTime="2026-02-18 11:56:09.736743415 +0000 UTC m=+1171.464447505" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.753558 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=13.550213046 podStartE2EDuration="50.753540099s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="2026-02-18 11:55:23.219562284 +0000 UTC m=+1124.947266364" lastFinishedPulling="2026-02-18 11:56:00.422889337 +0000 UTC m=+1162.150593417" observedRunningTime="2026-02-18 11:56:09.749556749 +0000 UTC m=+1171.477260839" watchObservedRunningTime="2026-02-18 11:56:09.753540099 +0000 UTC m=+1171.481244169" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.770418 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8pvj2" podStartSLOduration=3.770386755 podStartE2EDuration="3.770386755s" podCreationTimestamp="2026-02-18 11:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:09.766045195 +0000 UTC m=+1171.493749275" watchObservedRunningTime="2026-02-18 11:56:09.770386755 +0000 UTC m=+1171.498090835" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.891966 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:09 crc kubenswrapper[4922]: I0218 11:56:09.921137 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.009196 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="c7e64f4b-f6d6-43b3-aeab-47a0da094a84" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.149:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.131575 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.131622 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.158998 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.687830 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.719568 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.725192 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.770132 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:10 crc kubenswrapper[4922]: I0218 11:56:10.790276 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:11 crc kubenswrapper[4922]: W0218 11:56:11.586629 4922 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade99914_d8c2_4a1a_9492_d3bb2a83a64d.slice/crio-fd553f3191fff116bcb61d1761b6014e37c31888c5dc3e6ec76e921294d669b1 WatchSource:0}: Error finding container fd553f3191fff116bcb61d1761b6014e37c31888c5dc3e6ec76e921294d669b1: Status 404 returned error can't find the container with id fd553f3191fff116bcb61d1761b6014e37c31888c5dc3e6ec76e921294d669b1 Feb 18 11:56:11 crc kubenswrapper[4922]: I0218 11:56:11.698578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerStarted","Data":"fd553f3191fff116bcb61d1761b6014e37c31888c5dc3e6ec76e921294d669b1"} Feb 18 11:56:12 crc kubenswrapper[4922]: I0218 11:56:12.707870 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerStarted","Data":"f656a5eab3bc7d824593462123204765b1a3f4dc494bd8b0ed339492dd2506ae"} Feb 18 11:56:12 crc kubenswrapper[4922]: I0218 11:56:12.712040 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerStarted","Data":"666106d3ce79297e2d8ed502af3f7dcfa2bfde51946ace3e35cdb24c5fc323a9"} Feb 18 11:56:12 crc kubenswrapper[4922]: I0218 11:56:12.714019 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" containerID="cri-o://17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" gracePeriod=30 Feb 18 11:56:12 crc kubenswrapper[4922]: I0218 11:56:12.714345 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bbf5454f6-d5958" event={"ID":"3bc8759d-86ff-415d-936a-064ef742f0d9","Type":"ContainerStarted","Data":"51e2a8b5f6d89d25573eab92eb04e45aafe7a4105df852b660f1bd03727a3929"} Feb 18 11:56:12 crc kubenswrapper[4922]: I0218 11:56:12.714479 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerName="watcher-decision-engine" containerID="cri-o://538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" gracePeriod=30 Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.726293 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerStarted","Data":"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a"} Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.728851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerStarted","Data":"0b0005d984f6a47d8e4c600b875d8f557a7684614b4f1a8f4fe087932cc0bade"} Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.729214 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.730904 4922 generic.go:334] "Generic (PLEG): container finished" podID="53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" containerID="a94ac8d35c2954541cdb5fa078a3d4a480cc9002ceb6fe5897faee87fe9e9f1f" exitCode=0 Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.730997 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pvj2" 
event={"ID":"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0","Type":"ContainerDied","Data":"a94ac8d35c2954541cdb5fa078a3d4a480cc9002ceb6fe5897faee87fe9e9f1f"} Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.740429 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerStarted","Data":"5d696a6b456c46bcc03a5a4e42955c244ebb50a3f0ae3ca3ca3bec18dc6258ea"} Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.744300 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bbf5454f6-d5958" event={"ID":"3bc8759d-86ff-415d-936a-064ef742f0d9","Type":"ContainerStarted","Data":"16cae7749b19b6f7611278b6d47dec0ae3d07e81721fc5c98ebe25f67bfc34ab"} Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.754619 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.7545976450000005 podStartE2EDuration="5.754597645s" podCreationTimestamp="2026-02-18 11:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:13.752464581 +0000 UTC m=+1175.480168681" watchObservedRunningTime="2026-02-18 11:56:13.754597645 +0000 UTC m=+1175.482301725" Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.780196 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bbf5454f6-d5958" podStartSLOduration=42.155892871 podStartE2EDuration="45.780174061s" podCreationTimestamp="2026-02-18 11:55:28 +0000 UTC" firstStartedPulling="2026-02-18 11:56:08.572237353 +0000 UTC m=+1170.299941433" lastFinishedPulling="2026-02-18 11:56:12.196518533 +0000 UTC m=+1173.924222623" observedRunningTime="2026-02-18 11:56:13.77061986 +0000 UTC m=+1175.498323940" watchObservedRunningTime="2026-02-18 11:56:13.780174061 +0000 UTC m=+1175.507878141" Feb 18 11:56:13 crc kubenswrapper[4922]: I0218 11:56:13.827991 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9d79df67b-mg9kq" podStartSLOduration=42.19425674 podStartE2EDuration="45.82796957s" podCreationTimestamp="2026-02-18 11:55:28 +0000 UTC" firstStartedPulling="2026-02-18 11:56:08.552494523 +0000 UTC m=+1170.280198603" lastFinishedPulling="2026-02-18 11:56:12.186207343 +0000 UTC m=+1173.913911433" observedRunningTime="2026-02-18 11:56:13.821170568 +0000 UTC m=+1175.548874648" watchObservedRunningTime="2026-02-18 11:56:13.82796957 +0000 UTC m=+1175.555673650" Feb 18 11:56:14 crc kubenswrapper[4922]: I0218 11:56:14.127819 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.125785 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:15 crc kubenswrapper[4922]: E0218 11:56:15.137654 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:15 crc kubenswrapper[4922]: E0218 11:56:15.139459 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:15 crc kubenswrapper[4922]: E0218 11:56:15.145826 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:15 crc kubenswrapper[4922]: E0218 11:56:15.145905 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.226794 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.226890 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.227040 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.227109 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcv5t\" (UniqueName: \"kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.227154 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.227197 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle\") pod \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\" (UID: \"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0\") " Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.236613 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.236711 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t" (OuterVolumeSpecName: "kube-api-access-bcv5t") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "kube-api-access-bcv5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.250611 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.251347 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts" (OuterVolumeSpecName: "scripts") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.255078 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.259190 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data" (OuterVolumeSpecName: "config-data") pod "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" (UID: "53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330158 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330217 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330231 4922 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330244 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330258 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcv5t\" (UniqueName: \"kubernetes.io/projected/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-kube-api-access-bcv5t\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.330272 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.690555 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.787878 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.788437 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8pvj2" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.788498 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pvj2" event={"ID":"53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0","Type":"ContainerDied","Data":"00caa03b7a061a2933729fdd382be3e5af6da2b0ea0a8fe37af7145a15ee06a2"} Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.788547 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00caa03b7a061a2933729fdd382be3e5af6da2b0ea0a8fe37af7145a15ee06a2" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.789246 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.789482 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="dnsmasq-dns" containerID="cri-o://9c3a3f840bf392e642ec5f6d88074e14f3634fce3f5fe950f1ca6ef37f5d1bd5" gracePeriod=10 Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.919138 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b854f8786-pls2t"] Feb 18 11:56:15 crc kubenswrapper[4922]: E0218 11:56:15.919838 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" containerName="keystone-bootstrap" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.919853 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" containerName="keystone-bootstrap" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.920101 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" containerName="keystone-bootstrap" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.920847 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.926902 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.927350 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.927607 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.927732 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.927774 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8rn4" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.939612 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2jn2\" (UniqueName: \"kubernetes.io/projected/2efd0609-4858-47ce-8213-6a74510e8acf-kube-api-access-z2jn2\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944755 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-fernet-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944790 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-combined-ca-bundle\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944885 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-internal-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944929 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-config-data\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944961 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-scripts\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.944985 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-public-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.945012 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-credential-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:15 crc kubenswrapper[4922]: I0218 11:56:15.966826 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b854f8786-pls2t"] Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.047578 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-internal-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.047727 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-config-data\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.047758 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-scripts\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.047775 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-public-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.047802 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-credential-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.049521 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2jn2\" (UniqueName: \"kubernetes.io/projected/2efd0609-4858-47ce-8213-6a74510e8acf-kube-api-access-z2jn2\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.049566 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-fernet-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc 
kubenswrapper[4922]: I0218 11:56:16.049614 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-combined-ca-bundle\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.053576 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-config-data\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.054978 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-credential-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.055712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-public-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.058943 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-internal-tls-certs\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.060064 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-scripts\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.060515 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-combined-ca-bundle\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.071223 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2efd0609-4858-47ce-8213-6a74510e8acf-fernet-keys\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.076715 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2jn2\" (UniqueName: \"kubernetes.io/projected/2efd0609-4858-47ce-8213-6a74510e8acf-kube-api-access-z2jn2\") pod \"keystone-b854f8786-pls2t\" (UID: \"2efd0609-4858-47ce-8213-6a74510e8acf\") " pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.255467 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 
11:56:16.299581 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.809637 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerID="9c3a3f840bf392e642ec5f6d88074e14f3634fce3f5fe950f1ca6ef37f5d1bd5" exitCode=0 Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.811215 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" event={"ID":"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0","Type":"ContainerDied","Data":"9c3a3f840bf392e642ec5f6d88074e14f3634fce3f5fe950f1ca6ef37f5d1bd5"} Feb 18 11:56:16 crc kubenswrapper[4922]: I0218 11:56:16.814021 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b854f8786-pls2t"] Feb 18 11:56:18 crc kubenswrapper[4922]: I0218 11:56:18.388921 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:56:18 crc kubenswrapper[4922]: I0218 11:56:18.389880 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:56:18 crc kubenswrapper[4922]: I0218 11:56:18.523290 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:56:18 crc kubenswrapper[4922]: I0218 11:56:18.523418 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:56:19 crc kubenswrapper[4922]: I0218 11:56:19.126995 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 11:56:19 crc kubenswrapper[4922]: I0218 11:56:19.131330 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 11:56:19 crc kubenswrapper[4922]: I0218 11:56:19.848470 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 11:56:19 crc kubenswrapper[4922]: E0218 11:56:19.895647 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 11:56:19 crc kubenswrapper[4922]: E0218 11:56:19.898924 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 11:56:19 crc kubenswrapper[4922]: E0218 11:56:19.900599 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 18 11:56:19 crc kubenswrapper[4922]: E0218 11:56:19.900641 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/watcher-decision-engine-0" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerName="watcher-decision-engine" Feb 18 11:56:20 crc kubenswrapper[4922]: E0218 11:56:20.133701 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:20 crc kubenswrapper[4922]: E0218 11:56:20.135382 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:20 crc kubenswrapper[4922]: E0218 11:56:20.139704 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:20 crc kubenswrapper[4922]: E0218 11:56:20.139769 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:22 crc kubenswrapper[4922]: W0218 11:56:22.390467 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2efd0609_4858_47ce_8213_6a74510e8acf.slice/crio-3d68965d6f82575327207faca5029e68947c2c261cc7f048c8228ca314476d56 WatchSource:0}: Error finding container 3d68965d6f82575327207faca5029e68947c2c261cc7f048c8228ca314476d56: Status 404 returned error can't find the container with id 3d68965d6f82575327207faca5029e68947c2c261cc7f048c8228ca314476d56 Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.473221 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.617980 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7r9n\" (UniqueName: \"kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n\") pod \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.618072 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config\") pod \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.618240 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc\") pod \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.618271 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb\") pod \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.618300 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb\") pod \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\" (UID: \"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0\") " Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.624394 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n" (OuterVolumeSpecName: "kube-api-access-f7r9n") pod "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" (UID: "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0"). InnerVolumeSpecName "kube-api-access-f7r9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.679662 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" (UID: "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.684764 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" (UID: "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.685989 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config" (OuterVolumeSpecName: "config") pod "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" (UID: "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.696188 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" (UID: "ec6256f2-24ef-4e83-9a3c-c98ef206f7b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.720283 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.720314 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.720328 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.720337 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7r9n\" (UniqueName: \"kubernetes.io/projected/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-kube-api-access-f7r9n\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.720346 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.882148 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b854f8786-pls2t" event={"ID":"2efd0609-4858-47ce-8213-6a74510e8acf","Type":"ContainerStarted","Data":"3d68965d6f82575327207faca5029e68947c2c261cc7f048c8228ca314476d56"} Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.885324 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" event={"ID":"ec6256f2-24ef-4e83-9a3c-c98ef206f7b0","Type":"ContainerDied","Data":"dd9d9badec862e12fd53c23629f63f90af2fbdcdb63f9f7d603ffed75ff6a6ad"} Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.885438 4922 scope.go:117] "RemoveContainer" containerID="9c3a3f840bf392e642ec5f6d88074e14f3634fce3f5fe950f1ca6ef37f5d1bd5" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.910819 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.946027 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.955503 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-zmcwq"] Feb 18 11:56:22 crc kubenswrapper[4922]: I0218 11:56:22.989532 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" path="/var/lib/kubelet/pods/ec6256f2-24ef-4e83-9a3c-c98ef206f7b0/volumes" Feb 18 11:56:24 crc kubenswrapper[4922]: I0218 11:56:24.200427 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:24 crc kubenswrapper[4922]: I0218 11:56:24.200704 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" containerID="cri-o://f656a5eab3bc7d824593462123204765b1a3f4dc494bd8b0ed339492dd2506ae" gracePeriod=30 Feb 18 11:56:24 crc kubenswrapper[4922]: I0218 11:56:24.200911 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" containerID="cri-o://0b0005d984f6a47d8e4c600b875d8f557a7684614b4f1a8f4fe087932cc0bade" gracePeriod=30 Feb 18 11:56:24 crc kubenswrapper[4922]: I0218 11:56:24.812737 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-zmcwq" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Feb 18 11:56:25 crc kubenswrapper[4922]: E0218 11:56:25.133625 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:25 crc kubenswrapper[4922]: E0218 11:56:25.135575 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:25 crc kubenswrapper[4922]: E0218 11:56:25.137481 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:25 crc kubenswrapper[4922]: E0218 11:56:25.137522 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:25 crc kubenswrapper[4922]: I0218 11:56:25.922786 4922 generic.go:334] "Generic (PLEG): container finished" podID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" 
containerID="f656a5eab3bc7d824593462123204765b1a3f4dc494bd8b0ed339492dd2506ae" exitCode=143 Feb 18 11:56:25 crc kubenswrapper[4922]: I0218 11:56:25.922867 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerDied","Data":"f656a5eab3bc7d824593462123204765b1a3f4dc494bd8b0ed339492dd2506ae"} Feb 18 11:56:27 crc kubenswrapper[4922]: I0218 11:56:27.969549 4922 generic.go:334] "Generic (PLEG): container finished" podID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerID="0b0005d984f6a47d8e4c600b875d8f557a7684614b4f1a8f4fe087932cc0bade" exitCode=0 Feb 18 11:56:27 crc kubenswrapper[4922]: I0218 11:56:27.969578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerDied","Data":"0b0005d984f6a47d8e4c600b875d8f557a7684614b4f1a8f4fe087932cc0bade"} Feb 18 11:56:28 crc kubenswrapper[4922]: I0218 11:56:28.389926 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 11:56:28 crc kubenswrapper[4922]: I0218 11:56:28.525993 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bbf5454f6-d5958" podUID="3bc8759d-86ff-415d-936a-064ef742f0d9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Feb 18 11:56:29 crc kubenswrapper[4922]: I0218 11:56:29.127313 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 18 11:56:29 crc kubenswrapper[4922]: I0218 11:56:29.127333 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Feb 18 11:56:30 crc kubenswrapper[4922]: E0218 11:56:30.167976 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:30 crc kubenswrapper[4922]: E0218 11:56:30.172141 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:30 crc kubenswrapper[4922]: E0218 11:56:30.173567 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:30 crc kubenswrapper[4922]: E0218 11:56:30.173628 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:35 crc kubenswrapper[4922]: E0218 11:56:35.133201 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:35 crc kubenswrapper[4922]: E0218 11:56:35.135406 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:35 crc kubenswrapper[4922]: E0218 11:56:35.136331 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:35 crc kubenswrapper[4922]: E0218 11:56:35.136387 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:36 crc kubenswrapper[4922]: E0218 11:56:36.239873 4922 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Feb 18 11:56:36 crc kubenswrapper[4922]: E0218 11:56:36.240020 4922 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmwrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a3873c9e-308f-46ea-ac8f-4ee78ca92235): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 11:56:36 crc kubenswrapper[4922]: I0218 11:56:36.359466 4922 scope.go:117] "RemoveContainer" containerID="95d55b3314d502d1433cbc46f43ce2797b326e2911e0feadc7c77b360ebeb491" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.811287 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.839468 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs\") pod \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.839541 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle\") pod \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.839611 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca\") pod \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.839631 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhfnt\" (UniqueName: \"kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt\") pod \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.839653 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data\") pod \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\" (UID: \"ade99914-d8c2-4a1a-9492-d3bb2a83a64d\") " Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.848700 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs" (OuterVolumeSpecName: "logs") pod "ade99914-d8c2-4a1a-9492-d3bb2a83a64d" (UID: "ade99914-d8c2-4a1a-9492-d3bb2a83a64d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.871554 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt" (OuterVolumeSpecName: "kube-api-access-vhfnt") pod "ade99914-d8c2-4a1a-9492-d3bb2a83a64d" (UID: "ade99914-d8c2-4a1a-9492-d3bb2a83a64d"). InnerVolumeSpecName "kube-api-access-vhfnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.909370 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ade99914-d8c2-4a1a-9492-d3bb2a83a64d" (UID: "ade99914-d8c2-4a1a-9492-d3bb2a83a64d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.929912 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade99914-d8c2-4a1a-9492-d3bb2a83a64d" (UID: "ade99914-d8c2-4a1a-9492-d3bb2a83a64d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.951872 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.951914 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.951931 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.951948 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhfnt\" (UniqueName: \"kubernetes.io/projected/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-kube-api-access-vhfnt\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:37 crc kubenswrapper[4922]: I0218 11:56:37.989439 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data" (OuterVolumeSpecName: "config-data") pod "ade99914-d8c2-4a1a-9492-d3bb2a83a64d" (UID: "ade99914-d8c2-4a1a-9492-d3bb2a83a64d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.054202 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade99914-d8c2-4a1a-9492-d3bb2a83a64d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.056739 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b854f8786-pls2t" event={"ID":"2efd0609-4858-47ce-8213-6a74510e8acf","Type":"ContainerStarted","Data":"56fe05bf9b6b7132f87e22c520afda7a7ad80144754369b0d0ab69f18eb6ea5a"} Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.056901 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.061536 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mqfhx" event={"ID":"31aad152-dcb7-472f-a0f8-d90ae972442b","Type":"ContainerStarted","Data":"fdbd37640dccec05f284f7ac0ade4661f3a31188c1203c2107b809735927d31b"} Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.067765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ade99914-d8c2-4a1a-9492-d3bb2a83a64d","Type":"ContainerDied","Data":"fd553f3191fff116bcb61d1761b6014e37c31888c5dc3e6ec76e921294d669b1"} Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.067824 4922 scope.go:117] "RemoveContainer" containerID="0b0005d984f6a47d8e4c600b875d8f557a7684614b4f1a8f4fe087932cc0bade" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.067985 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.080913 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b854f8786-pls2t" podStartSLOduration=23.080895895 podStartE2EDuration="23.080895895s" podCreationTimestamp="2026-02-18 11:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:38.073945889 +0000 UTC m=+1199.801649959" watchObservedRunningTime="2026-02-18 11:56:38.080895895 +0000 UTC m=+1199.808599975" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.101070 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mqfhx" podStartSLOduration=5.031170574 podStartE2EDuration="1m19.101053945s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="2026-02-18 11:55:23.590736598 +0000 UTC m=+1125.318440668" lastFinishedPulling="2026-02-18 11:56:37.660619959 +0000 UTC m=+1199.388324039" observedRunningTime="2026-02-18 11:56:38.100015159 +0000 UTC m=+1199.827719289" watchObservedRunningTime="2026-02-18 11:56:38.101053945 +0000 UTC m=+1199.828758025" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.137800 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.140968 4922 scope.go:117] "RemoveContainer" containerID="f656a5eab3bc7d824593462123204765b1a3f4dc494bd8b0ed339492dd2506ae" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.156398 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.176747 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/watcher-api-0"] Feb 18 11:56:38 crc kubenswrapper[4922]: E0218 11:56:38.177890 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.177944 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" Feb 18 11:56:38 crc kubenswrapper[4922]: E0218 11:56:38.177961 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="dnsmasq-dns" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.177967 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="dnsmasq-dns" Feb 18 11:56:38 crc kubenswrapper[4922]: E0218 11:56:38.178025 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="init" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.178033 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="init" Feb 18 11:56:38 crc kubenswrapper[4922]: E0218 11:56:38.178053 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.178059 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.178478 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.178538 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.178554 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6256f2-24ef-4e83-9a3c-c98ef206f7b0" containerName="dnsmasq-dns" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.180136 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.180293 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.183665 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.183970 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.184260 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360465 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-logs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360522 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-config-data\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360579 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360779 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffxp\" (UniqueName: \"kubernetes.io/projected/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-kube-api-access-jffxp\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360839 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.360879 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.361058 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-public-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.462870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-public-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " 
pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.462965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-logs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463043 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-config-data\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463064 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463203 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffxp\" (UniqueName: \"kubernetes.io/projected/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-kube-api-access-jffxp\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463241 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463286 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.463671 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-logs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.471444 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.471966 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.474176 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-config-data\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " 
pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.475085 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-public-tls-certs\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.476108 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.487141 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffxp\" (UniqueName: \"kubernetes.io/projected/43b1edea-6c95-42ae-b30a-d3ce2eb1e0de-kube-api-access-jffxp\") pod \"watcher-api-0\" (UID: \"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de\") " pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.516769 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 11:56:38 crc kubenswrapper[4922]: I0218 11:56:38.822501 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.005844 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" path="/var/lib/kubelet/pods/ade99914-d8c2-4a1a-9492-d3bb2a83a64d/volumes" Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.092416 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvpx7" event={"ID":"d7852f85-b8c5-458e-901c-3659c5ed2713","Type":"ContainerStarted","Data":"483ace8c8816a57983828e0b595d4b4e906ff56406e775bb5aac307ba89d5e30"} Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.095223 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de","Type":"ContainerStarted","Data":"5150e1ea135ca30602971c47aa10843dc651670d87ed44c36bdcbd6e651bf96f"} Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.095264 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de","Type":"ContainerStarted","Data":"835aa3653d2597b3da5bab3dd4e7277274d3fa27526015cb79464cf620c313bf"} Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.127871 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.128346 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ade99914-d8c2-4a1a-9492-d3bb2a83a64d" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.807394 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:56:39 crc kubenswrapper[4922]: I0218 11:56:39.807470 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.107609 4922 generic.go:334] "Generic (PLEG): container finished" podID="71318c6d-61ee-4fb4-8682-7cf3fc0ae044" containerID="5597fcea1ce088dbc75d5c2c33874c67f09dcb474426ad6372bb91fff5a29ef3" exitCode=0 Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.107664 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-clz29" event={"ID":"71318c6d-61ee-4fb4-8682-7cf3fc0ae044","Type":"ContainerDied","Data":"5597fcea1ce088dbc75d5c2c33874c67f09dcb474426ad6372bb91fff5a29ef3"} Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.112396 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"43b1edea-6c95-42ae-b30a-d3ce2eb1e0de","Type":"ContainerStarted","Data":"94eb0f2dfb2c70c102724ce9c06e392c0ec15f8e812096448c4b43eb5202059c"} Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.113428 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.131501 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rvpx7" podStartSLOduration=7.083547602 podStartE2EDuration="1m21.13033859s" podCreationTimestamp="2026-02-18 11:55:19 +0000 UTC" firstStartedPulling="2026-02-18 11:55:23.640709111 +0000 UTC m=+1125.368413191" lastFinishedPulling="2026-02-18 11:56:37.687500099 +0000 UTC m=+1199.415204179" observedRunningTime="2026-02-18 11:56:39.121243407 +0000 UTC m=+1200.848947497" watchObservedRunningTime="2026-02-18 11:56:40.13033859 +0000 UTC m=+1201.858042670" Feb 18 11:56:40 crc kubenswrapper[4922]: E0218 11:56:40.133254 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:40 crc kubenswrapper[4922]: E0218 11:56:40.134650 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:40 crc kubenswrapper[4922]: E0218 11:56:40.137752 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:40 crc kubenswrapper[4922]: E0218 11:56:40.137822 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.156780 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.156760138 podStartE2EDuration="2.156760138s" podCreationTimestamp="2026-02-18 11:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:40.149976176 +0000 UTC m=+1201.877680256" watchObservedRunningTime="2026-02-18 11:56:40.156760138 +0000 UTC m=+1201.884464218" Feb 18 11:56:40 crc kubenswrapper[4922]: I0218 11:56:40.993670 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:56:41 crc kubenswrapper[4922]: I0218 11:56:41.170416 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:56:42 crc kubenswrapper[4922]: I0218 11:56:42.129279 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:56:42 crc kubenswrapper[4922]: I0218 11:56:42.759753 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 11:56:42 crc kubenswrapper[4922]: I0218 11:56:42.863546 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.146023 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" exitCode=137 Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.146119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad","Type":"ContainerDied","Data":"17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003"} Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.150592 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9fc515-5e15-41fc-8e76-b9f3af099a0f","Type":"ContainerDied","Data":"538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d"} Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.150518 4922 generic.go:334] "Generic (PLEG): container finished" podID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerID="538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" exitCode=137 Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.238893 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bbf5454f6-d5958" Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.347821 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.348101 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon-log" containerID="cri-o://666106d3ce79297e2d8ed502af3f7dcfa2bfde51946ace3e35cdb24c5fc323a9" gracePeriod=30 Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.348500 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d79df67b-mg9kq" 
podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" containerID="cri-o://5d696a6b456c46bcc03a5a4e42955c244ebb50a3f0ae3ca3ca3bec18dc6258ea" gracePeriod=30 Feb 18 11:56:43 crc kubenswrapper[4922]: I0218 11:56:43.517345 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.164944 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-clz29" event={"ID":"71318c6d-61ee-4fb4-8682-7cf3fc0ae044","Type":"ContainerDied","Data":"c85150d5f0f00107589b346911782e9cacbfd0c2f75cf23b1582f2bb391c607c"} Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.165208 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85150d5f0f00107589b346911782e9cacbfd0c2f75cf23b1582f2bb391c607c" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.215183 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-clz29" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.297981 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle\") pod \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.299010 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d8ft\" (UniqueName: \"kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft\") pod \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.299876 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts\") pod \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.299936 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs\") pod \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.299959 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data\") pod \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\" (UID: \"71318c6d-61ee-4fb4-8682-7cf3fc0ae044\") " Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.300248 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs" (OuterVolumeSpecName: "logs") pod "71318c6d-61ee-4fb4-8682-7cf3fc0ae044" (UID: "71318c6d-61ee-4fb4-8682-7cf3fc0ae044"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.300651 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.304646 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts" (OuterVolumeSpecName: "scripts") pod "71318c6d-61ee-4fb4-8682-7cf3fc0ae044" (UID: "71318c6d-61ee-4fb4-8682-7cf3fc0ae044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.304730 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft" (OuterVolumeSpecName: "kube-api-access-5d8ft") pod "71318c6d-61ee-4fb4-8682-7cf3fc0ae044" (UID: "71318c6d-61ee-4fb4-8682-7cf3fc0ae044"). InnerVolumeSpecName "kube-api-access-5d8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.326157 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data" (OuterVolumeSpecName: "config-data") pod "71318c6d-61ee-4fb4-8682-7cf3fc0ae044" (UID: "71318c6d-61ee-4fb4-8682-7cf3fc0ae044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.351295 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71318c6d-61ee-4fb4-8682-7cf3fc0ae044" (UID: "71318c6d-61ee-4fb4-8682-7cf3fc0ae044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.402856 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.402893 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d8ft\" (UniqueName: \"kubernetes.io/projected/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-kube-api-access-5d8ft\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.402910 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:44 crc kubenswrapper[4922]: I0218 11:56:44.402920 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71318c6d-61ee-4fb4-8682-7cf3fc0ae044-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: E0218 11:56:45.132276 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003 is running failed: container process not found" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:45 crc kubenswrapper[4922]: E0218 11:56:45.133071 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003 is running failed: container process not found" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:45 crc kubenswrapper[4922]: E0218 11:56:45.133844 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003 is running failed: container process not found" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 18 11:56:45 crc kubenswrapper[4922]: E0218 11:56:45.133915 4922 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.179065 4922 generic.go:334] "Generic (PLEG): container finished" podID="31aad152-dcb7-472f-a0f8-d90ae972442b" containerID="fdbd37640dccec05f284f7ac0ade4661f3a31188c1203c2107b809735927d31b" exitCode=0 Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.179156 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-clz29" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.179396 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mqfhx" event={"ID":"31aad152-dcb7-472f-a0f8-d90ae972442b","Type":"ContainerDied","Data":"fdbd37640dccec05f284f7ac0ade4661f3a31188c1203c2107b809735927d31b"} Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.342331 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c7b84785b-f8lmj"] Feb 18 11:56:45 crc kubenswrapper[4922]: E0218 11:56:45.342836 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71318c6d-61ee-4fb4-8682-7cf3fc0ae044" containerName="placement-db-sync" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.342854 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="71318c6d-61ee-4fb4-8682-7cf3fc0ae044" containerName="placement-db-sync" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.352493 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="71318c6d-61ee-4fb4-8682-7cf3fc0ae044" containerName="placement-db-sync" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.356679 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c7b84785b-f8lmj"] Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.356798 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.361937 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pnzs4" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.362222 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.362971 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.363269 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.363430 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.432732 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-internal-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.432815 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-config-data\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.433055 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-combined-ca-bundle\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 
11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.433099 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-public-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.433136 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxhk\" (UniqueName: \"kubernetes.io/projected/280ad3f5-10de-4dc8-866b-c7502c004835-kube-api-access-xhxhk\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.433155 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-scripts\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.433389 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280ad3f5-10de-4dc8-866b-c7502c004835-logs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535070 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-combined-ca-bundle\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535162 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-public-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhxhk\" (UniqueName: \"kubernetes.io/projected/280ad3f5-10de-4dc8-866b-c7502c004835-kube-api-access-xhxhk\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535241 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-scripts\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535287 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280ad3f5-10de-4dc8-866b-c7502c004835-logs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 
11:56:45.535341 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-internal-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.535428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-config-data\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.536104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280ad3f5-10de-4dc8-866b-c7502c004835-logs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.540905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-internal-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.540909 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-scripts\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.541845 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-public-tls-certs\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.542017 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-config-data\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.542160 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280ad3f5-10de-4dc8-866b-c7502c004835-combined-ca-bundle\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.557168 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhxhk\" (UniqueName: \"kubernetes.io/projected/280ad3f5-10de-4dc8-866b-c7502c004835-kube-api-access-xhxhk\") pod \"placement-7c7b84785b-f8lmj\" (UID: \"280ad3f5-10de-4dc8-866b-c7502c004835\") " pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.685561 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.737109 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.747025 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840216 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data\") pod \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840331 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlfjr\" (UniqueName: \"kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr\") pod \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840399 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lkg5\" (UniqueName: \"kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5\") pod \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840469 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca\") pod \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840523 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data\") pod \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840571 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle\") pod \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840643 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs\") pod \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\" (UID: \"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840757 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs\") pod \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.840810 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle\") pod 
\"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\" (UID: \"9e9fc515-5e15-41fc-8e76-b9f3af099a0f\") " Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.841959 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs" (OuterVolumeSpecName: "logs") pod "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" (UID: "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.842286 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs" (OuterVolumeSpecName: "logs") pod "9e9fc515-5e15-41fc-8e76-b9f3af099a0f" (UID: "9e9fc515-5e15-41fc-8e76-b9f3af099a0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.847180 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr" (OuterVolumeSpecName: "kube-api-access-dlfjr") pod "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" (UID: "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad"). InnerVolumeSpecName "kube-api-access-dlfjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.858699 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5" (OuterVolumeSpecName: "kube-api-access-6lkg5") pod "9e9fc515-5e15-41fc-8e76-b9f3af099a0f" (UID: "9e9fc515-5e15-41fc-8e76-b9f3af099a0f"). InnerVolumeSpecName "kube-api-access-6lkg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.896634 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9fc515-5e15-41fc-8e76-b9f3af099a0f" (UID: "9e9fc515-5e15-41fc-8e76-b9f3af099a0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.904755 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9e9fc515-5e15-41fc-8e76-b9f3af099a0f" (UID: "9e9fc515-5e15-41fc-8e76-b9f3af099a0f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.913491 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data" (OuterVolumeSpecName: "config-data") pod "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" (UID: "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.942025 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" (UID: "0c614b6a-8d46-4a07-89f6-1a7cc64dfdad"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943207 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943222 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943231 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943240 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlfjr\" (UniqueName: \"kubernetes.io/projected/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-kube-api-access-dlfjr\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943250 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lkg5\" (UniqueName: \"kubernetes.io/projected/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-kube-api-access-6lkg5\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943258 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943266 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.943273 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:45 crc kubenswrapper[4922]: I0218 11:56:45.958545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data" (OuterVolumeSpecName: "config-data") pod "9e9fc515-5e15-41fc-8e76-b9f3af099a0f" (UID: "9e9fc515-5e15-41fc-8e76-b9f3af099a0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:46 crc kubenswrapper[4922]: E0218 11:56:46.046835 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.049113 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9fc515-5e15-41fc-8e76-b9f3af099a0f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.190910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9e9fc515-5e15-41fc-8e76-b9f3af099a0f","Type":"ContainerDied","Data":"089198a21158f6f6b56f6630ad31cc5105f4b0001f6d2937e2ed72533d2aaec8"} Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.190950 4922 scope.go:117] "RemoveContainer" containerID="538ff05accf8e69796a85607f99df1d694327aae777210403771939c222ef98d" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.191064 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.194652 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"0c614b6a-8d46-4a07-89f6-1a7cc64dfdad","Type":"ContainerDied","Data":"c1680a690b411a1b27ee20c720d7966628a4b267d54560c222bee5a47c521f74"} Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.194737 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.200661 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerStarted","Data":"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c"} Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.200787 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="ceilometer-notification-agent" containerID="cri-o://a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a" gracePeriod=30 Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.201407 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="proxy-httpd" containerID="cri-o://1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c" gracePeriod=30 Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.236185 4922 scope.go:117] "RemoveContainer" containerID="17cce53602151ba6d1c560fb5734c8a81055d3f6f58806eded906ecc6d4b9003" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.264501 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.286117 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.292590 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.308221 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.372428 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: E0218 11:56:46.373344 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.373420 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:46 crc kubenswrapper[4922]: E0218 11:56:46.373431 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerName="watcher-decision-engine" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.373437 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerName="watcher-decision-engine" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.373836 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" containerName="watcher-decision-engine" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.373878 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" containerName="watcher-applier" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.374735 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.377325 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.388244 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.410300 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.411983 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.414922 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.420095 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466348 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466405 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zsgq\" (UniqueName: \"kubernetes.io/projected/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-kube-api-access-9zsgq\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466447 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-config-data\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466481 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466507 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6wh\" (UniqueName: \"kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466526 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466571 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-logs\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466605 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.466633 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.529752 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c7b84785b-f8lmj"] Feb 18 11:56:46 crc kubenswrapper[4922]: W0218 11:56:46.532254 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280ad3f5_10de_4dc8_866b_c7502c004835.slice/crio-18e2c3e882668ec70d6f9fd6f54e2aaecf4f14013bbcf2f79e39db1abbf09685 WatchSource:0}: Error finding container 18e2c3e882668ec70d6f9fd6f54e2aaecf4f14013bbcf2f79e39db1abbf09685: Status 404 returned error can't find the container with id 18e2c3e882668ec70d6f9fd6f54e2aaecf4f14013bbcf2f79e39db1abbf09685 Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.568052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.568926 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569017 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569054 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zsgq\" (UniqueName: \"kubernetes.io/projected/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-kube-api-access-9zsgq\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569111 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-config-data\") 
pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569157 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569196 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6wh\" (UniqueName: \"kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569227 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569297 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-logs\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.569887 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-logs\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.573021 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.573877 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-config-data\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.574462 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.575874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.575881 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.587167 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6wh\" (UniqueName: \"kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh\") pod \"watcher-decision-engine-0\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.589379 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zsgq\" (UniqueName: \"kubernetes.io/projected/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-kube-api-access-9zsgq\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.589980 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd84d8c9-0a98-4f6b-b6da-887f4d294a38-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cd84d8c9-0a98-4f6b-b6da-887f4d294a38\") " pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.699167 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.737923 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.812758 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.878932 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data\") pod \"31aad152-dcb7-472f-a0f8-d90ae972442b\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.879561 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle\") pod \"31aad152-dcb7-472f-a0f8-d90ae972442b\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.879627 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-725mj\" (UniqueName: \"kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj\") pod \"31aad152-dcb7-472f-a0f8-d90ae972442b\" (UID: \"31aad152-dcb7-472f-a0f8-d90ae972442b\") " Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.886253 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj" (OuterVolumeSpecName: "kube-api-access-725mj") pod "31aad152-dcb7-472f-a0f8-d90ae972442b" (UID: "31aad152-dcb7-472f-a0f8-d90ae972442b"). InnerVolumeSpecName "kube-api-access-725mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.888048 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31aad152-dcb7-472f-a0f8-d90ae972442b" (UID: "31aad152-dcb7-472f-a0f8-d90ae972442b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.929760 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31aad152-dcb7-472f-a0f8-d90ae972442b" (UID: "31aad152-dcb7-472f-a0f8-d90ae972442b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.982672 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.982700 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-725mj\" (UniqueName: \"kubernetes.io/projected/31aad152-dcb7-472f-a0f8-d90ae972442b-kube-api-access-725mj\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:46 crc kubenswrapper[4922]: I0218 11:56:46.982710 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31aad152-dcb7-472f-a0f8-d90ae972442b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.014482 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c614b6a-8d46-4a07-89f6-1a7cc64dfdad" path="/var/lib/kubelet/pods/0c614b6a-8d46-4a07-89f6-1a7cc64dfdad/volumes" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.015378 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9fc515-5e15-41fc-8e76-b9f3af099a0f" path="/var/lib/kubelet/pods/9e9fc515-5e15-41fc-8e76-b9f3af099a0f/volumes" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.222887 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.233423 4922 generic.go:334] "Generic (PLEG): container finished" podID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerID="1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c" exitCode=0 Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.233501 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerDied","Data":"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c"} Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.239416 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerID="5d696a6b456c46bcc03a5a4e42955c244ebb50a3f0ae3ca3ca3bec18dc6258ea" exitCode=0 Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.239496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerDied","Data":"5d696a6b456c46bcc03a5a4e42955c244ebb50a3f0ae3ca3ca3bec18dc6258ea"} 
Feb 18 11:56:47 crc kubenswrapper[4922]: W0218 11:56:47.241083 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd84d8c9_0a98_4f6b_b6da_887f4d294a38.slice/crio-4dbe71facafe23dee1111cdea2c33c997148d5887f41ef4c5ff76c38faf18673 WatchSource:0}: Error finding container 4dbe71facafe23dee1111cdea2c33c997148d5887f41ef4c5ff76c38faf18673: Status 404 returned error can't find the container with id 4dbe71facafe23dee1111cdea2c33c997148d5887f41ef4c5ff76c38faf18673 Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.243179 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mqfhx" event={"ID":"31aad152-dcb7-472f-a0f8-d90ae972442b","Type":"ContainerDied","Data":"9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922"} Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.243216 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b5d6cd64e7bf271ae9d437b3dc8665c7fa56f4097bd261a86583c671e88f922" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.243192 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mqfhx" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.249949 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c7b84785b-f8lmj" event={"ID":"280ad3f5-10de-4dc8-866b-c7502c004835","Type":"ContainerStarted","Data":"527f7ccfe35683cb19a402f51d6fe18b2f99d6b702a02283209d599ae2f74449"} Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.250012 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c7b84785b-f8lmj" event={"ID":"280ad3f5-10de-4dc8-866b-c7502c004835","Type":"ContainerStarted","Data":"f4c4c35c75d436d0fa800652e1da8b91f7e3df925d6dd620eb4f664edd6c92b1"} Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.250027 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c7b84785b-f8lmj" event={"ID":"280ad3f5-10de-4dc8-866b-c7502c004835","Type":"ContainerStarted","Data":"18e2c3e882668ec70d6f9fd6f54e2aaecf4f14013bbcf2f79e39db1abbf09685"} Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.250528 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.250651 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.324509 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7c7b84785b-f8lmj" podStartSLOduration=2.324191738 podStartE2EDuration="2.324191738s" podCreationTimestamp="2026-02-18 11:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:47.322387562 +0000 UTC m=+1209.050091652" watchObservedRunningTime="2026-02-18 11:56:47.324191738 +0000 UTC m=+1209.051895808" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.394327 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.563725 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-676bd4cb85-2ggtc"] Feb 18 11:56:47 crc kubenswrapper[4922]: E0218 11:56:47.564117 4922 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" containerName="barbican-db-sync" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.564128 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" containerName="barbican-db-sync" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.564309 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" containerName="barbican-db-sync" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.565271 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.570695 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.571008 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pktbk" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.579935 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.617808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.617856 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93be7893-0b89-4762-870d-f5878ecddb3b-logs\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.617889 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data-custom\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.617951 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-combined-ca-bundle\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.617973 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9mb\" (UniqueName: \"kubernetes.io/projected/93be7893-0b89-4762-870d-f5878ecddb3b-kube-api-access-mr9mb\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.651829 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676bd4cb85-2ggtc"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.705194 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-keystone-listener-78995b5fcd-pmbbf"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.707077 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.709915 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.737796 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.738108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93be7893-0b89-4762-870d-f5878ecddb3b-logs\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.738136 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data-custom\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.738211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-combined-ca-bundle\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.738233 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9mb\" (UniqueName: \"kubernetes.io/projected/93be7893-0b89-4762-870d-f5878ecddb3b-kube-api-access-mr9mb\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.740545 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93be7893-0b89-4762-870d-f5878ecddb3b-logs\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.761406 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data-custom\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.765388 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-config-data\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " 
pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.775492 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93be7893-0b89-4762-870d-f5878ecddb3b-combined-ca-bundle\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.782544 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78995b5fcd-pmbbf"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.807432 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9mb\" (UniqueName: \"kubernetes.io/projected/93be7893-0b89-4762-870d-f5878ecddb3b-kube-api-access-mr9mb\") pod \"barbican-worker-676bd4cb85-2ggtc\" (UID: \"93be7893-0b89-4762-870d-f5878ecddb3b\") " pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.826268 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.828328 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.841673 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.841860 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2664c9b6-f62a-4453-8771-8c273f5f9ec1-logs\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.841931 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data-custom\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.842065 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8ls\" (UniqueName: \"kubernetes.io/projected/2664c9b6-f62a-4453-8771-8c273f5f9ec1-kube-api-access-xf8ls\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.842168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.842294 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-combined-ca-bundle\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.875198 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.877058 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.879864 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.886514 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945592 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-combined-ca-bundle\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945645 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945682 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945705 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945733 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2664c9b6-f62a-4453-8771-8c273f5f9ec1-logs\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945751 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jdmzm\" (UniqueName: \"kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945807 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945834 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data-custom\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945891 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf8ls\" (UniqueName: \"kubernetes.io/projected/2664c9b6-f62a-4453-8771-8c273f5f9ec1-kube-api-access-xf8ls\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945935 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945965 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.945985 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.946021 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc 
kubenswrapper[4922]: I0218 11:56:47.946562 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ptt\" (UniqueName: \"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.947117 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2664c9b6-f62a-4453-8771-8c273f5f9ec1-logs\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.956123 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.959736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-config-data-custom\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.963765 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2664c9b6-f62a-4453-8771-8c273f5f9ec1-combined-ca-bundle\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.971931 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-676bd4cb85-2ggtc" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.987793 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:56:47 crc kubenswrapper[4922]: I0218 11:56:47.988020 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf8ls\" (UniqueName: \"kubernetes.io/projected/2664c9b6-f62a-4453-8771-8c273f5f9ec1-kube-api-access-xf8ls\") pod \"barbican-keystone-listener-78995b5fcd-pmbbf\" (UID: \"2664c9b6-f62a-4453-8771-8c273f5f9ec1\") " pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.042911 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052014 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052124 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052159 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052197 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052232 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdmzm\" (UniqueName: \"kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052345 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052575 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052615 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " 
pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052676 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.052744 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ptt\" (UniqueName: \"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.054512 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.058637 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.060230 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.062769 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.062999 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.063571 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.066070 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.066257 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.067467 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.086800 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdmzm\" (UniqueName: \"kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm\") pod \"barbican-api-6d677498bd-cxq98\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.102499 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ptt\" (UniqueName: \"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt\") pod \"dnsmasq-dns-5bdb97b9f9-22mm8\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.157804 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwrr\" (UniqueName: \"kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158102 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158125 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158171 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158249 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158283 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle\") pod 
\"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.158352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd\") pod \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\" (UID: \"a3873c9e-308f-46ea-ac8f-4ee78ca92235\") " Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.161301 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.164287 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.175247 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr" (OuterVolumeSpecName: "kube-api-access-jmwrr") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "kube-api-access-jmwrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.181706 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts" (OuterVolumeSpecName: "scripts") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.201244 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.263122 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.263164 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.263197 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3873c9e-308f-46ea-ac8f-4ee78ca92235-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.263208 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwrr\" (UniqueName: \"kubernetes.io/projected/a3873c9e-308f-46ea-ac8f-4ee78ca92235-kube-api-access-jmwrr\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.263265 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.280936 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.290831 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"bdb6fddf-10f2-476b-822f-130f6fa12007","Type":"ContainerStarted","Data":"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.290874 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"bdb6fddf-10f2-476b-822f-130f6fa12007","Type":"ContainerStarted","Data":"767ebbd1c5208f481e0a0c9a07d1e2942ae4da643c6fc17067643a26968c3ac5"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.293487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"cd84d8c9-0a98-4f6b-b6da-887f4d294a38","Type":"ContainerStarted","Data":"4708bc7e5a3ee308741085c1464cbadd339fb2255ed09bf71ea1b5148a7b42d4"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.293524 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"cd84d8c9-0a98-4f6b-b6da-887f4d294a38","Type":"ContainerStarted","Data":"4dbe71facafe23dee1111cdea2c33c997148d5887f41ef4c5ff76c38faf18673"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.312781 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.315767 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.320420 4922 generic.go:334] "Generic (PLEG): container finished" podID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerID="a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a" exitCode=0 Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.322151 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.322638 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerDied","Data":"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.322671 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3873c9e-308f-46ea-ac8f-4ee78ca92235","Type":"ContainerDied","Data":"d077394e189534489b8a5cebe609760985017f8cfefceb856dec5e4e90cc20e1"} Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.322692 4922 scope.go:117] "RemoveContainer" containerID="1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.329275 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.329247279 podStartE2EDuration="2.329247279s" podCreationTimestamp="2026-02-18 11:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:48.316076626 +0000 UTC m=+1210.043780726" watchObservedRunningTime="2026-02-18 11:56:48.329247279 +0000 UTC m=+1210.056951359" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.341348 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.341331124 podStartE2EDuration="2.341331124s" podCreationTimestamp="2026-02-18 11:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:48.335874216 +0000 UTC m=+1210.063578296" watchObservedRunningTime="2026-02-18 11:56:48.341331124 +0000 UTC m=+1210.069035204" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.365638 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.391258 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.410157 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data" (OuterVolumeSpecName: "config-data") pod "a3873c9e-308f-46ea-ac8f-4ee78ca92235" (UID: "a3873c9e-308f-46ea-ac8f-4ee78ca92235"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.454786 4922 scope.go:117] "RemoveContainer" containerID="a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.455761 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676bd4cb85-2ggtc"] Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.470137 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3873c9e-308f-46ea-ac8f-4ee78ca92235-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.518760 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.522579 4922 scope.go:117] "RemoveContainer" containerID="1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c" Feb 18 11:56:48 crc kubenswrapper[4922]: E0218 11:56:48.524975 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c\": container with ID starting with 1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c not found: ID does not exist" containerID="1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.525038 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c"} err="failed to get container status \"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c\": rpc error: code = NotFound desc = could not find container \"1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c\": container with ID starting with 1f093722d754d2692788936f8e0c90e9361058a061c775d23bfb1b1c56a36b4c not found: ID does not exist" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.525082 4922 scope.go:117] "RemoveContainer" containerID="a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a" Feb 18 11:56:48 crc kubenswrapper[4922]: E0218 11:56:48.526488 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a\": container with ID starting with a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a not found: ID does not exist" containerID="a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.526602 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a"} err="failed to get container status \"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a\": rpc error: code = NotFound desc = could not find container \"a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a\": container with ID starting with a7b1fa49f2a63b5b267b8b5143ef790e17f341b707fb18ac5be2dd797d33cc2a not found: ID does not exist" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.641200 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.815876 4922 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-78995b5fcd-pmbbf"] Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.834349 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.843441 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.854673 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:56:48 crc kubenswrapper[4922]: E0218 11:56:48.855115 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="proxy-httpd" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.855138 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="proxy-httpd" Feb 18 11:56:48 crc kubenswrapper[4922]: E0218 11:56:48.855191 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="ceilometer-notification-agent" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.855198 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="ceilometer-notification-agent" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.855393 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="ceilometer-notification-agent" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.855412 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" containerName="proxy-httpd" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.857172 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.861082 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.876118 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.876693 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.990565 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3873c9e-308f-46ea-ac8f-4ee78ca92235" path="/var/lib/kubelet/pods/a3873c9e-308f-46ea-ac8f-4ee78ca92235/volumes" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.990888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.991698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6qk\" (UniqueName: \"kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.991976 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.992279 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.992326 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.992415 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:48 crc kubenswrapper[4922]: I0218 11:56:48.999733 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.094149 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] 
Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.101969 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6qk\" (UniqueName: \"kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102418 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102541 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102658 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102770 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.102968 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.111576 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.113779 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.119110 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.119110 4922 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.125270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.133925 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6qk\" (UniqueName: \"kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.186147 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.209795 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.227461 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.273988 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b854f8786-pls2t" Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.383897 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerStarted","Data":"6019911b846afdb67dccdb7e03471f7cbf11c9aa081bfb07a432273f6bc2c54b"} Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.401775 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bd4cb85-2ggtc" event={"ID":"93be7893-0b89-4762-870d-f5878ecddb3b","Type":"ContainerStarted","Data":"b7ac1e08c61e2809e23e4e7b68ab2c87a26179cf441a50aa9a12026cabe3e74a"} Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.412525 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" event={"ID":"d15f2ab3-202d-4241-a636-4d00475874aa","Type":"ContainerStarted","Data":"b70a4c113ae5df77985adf93070afceb0e7e0972f2d420cb43d3ae8ed3526536"} Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.445313 4922 generic.go:334] "Generic (PLEG): container finished" podID="d7852f85-b8c5-458e-901c-3659c5ed2713" containerID="483ace8c8816a57983828e0b595d4b4e906ff56406e775bb5aac307ba89d5e30" exitCode=0 Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.445461 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvpx7" event={"ID":"d7852f85-b8c5-458e-901c-3659c5ed2713","Type":"ContainerDied","Data":"483ace8c8816a57983828e0b595d4b4e906ff56406e775bb5aac307ba89d5e30"} Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.456553 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" 
event={"ID":"2664c9b6-f62a-4453-8771-8c273f5f9ec1","Type":"ContainerStarted","Data":"108bfbb2b14078785a032b05d82d62866bc5e08d6fa3e8a5b2f4eae587e2386f"} Feb 18 11:56:49 crc kubenswrapper[4922]: I0218 11:56:49.487163 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.103585 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.501843 4922 generic.go:334] "Generic (PLEG): container finished" podID="d15f2ab3-202d-4241-a636-4d00475874aa" containerID="007affd267df0eaaea70525abb6cf65b3773291056b908124aaf8cd367384660" exitCode=0 Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.501991 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" event={"ID":"d15f2ab3-202d-4241-a636-4d00475874aa","Type":"ContainerDied","Data":"007affd267df0eaaea70525abb6cf65b3773291056b908124aaf8cd367384660"} Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.510053 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerStarted","Data":"2f2fa694e60fe2de69033e6edac945c54110ee47bb40b70b10dec8b4f330dcd3"} Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.568851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerStarted","Data":"d307cc88ce59602f69621bcf0371da0ee28a8272ec22fc60147af239fd78badf"} Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.569163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerStarted","Data":"95063da6319615da44d0b1f12dd9abb6a109354894bb0cb11f9c25d007446b4c"} Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.570147 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:50 crc kubenswrapper[4922]: I0218 11:56:50.570183 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.183802 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d677498bd-cxq98" podStartSLOduration=4.183781428 podStartE2EDuration="4.183781428s" podCreationTimestamp="2026-02-18 11:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:50.616699001 +0000 UTC m=+1212.344403081" watchObservedRunningTime="2026-02-18 11:56:51.183781428 +0000 UTC m=+1212.911485528" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.195462 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-794d859fd8-fbbnx"] Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.202409 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.209310 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.209621 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.218305 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-794d859fd8-fbbnx"] Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323381 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data-custom\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323433 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-public-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323592 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323667 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-internal-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323702 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrm7\" (UniqueName: \"kubernetes.io/projected/d8d3eec1-763e-4874-b2af-19401e383fed-kube-api-access-ngrm7\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323719 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-combined-ca-bundle\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.323738 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3eec1-763e-4874-b2af-19401e383fed-logs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428484 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428651 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-internal-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428714 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrm7\" (UniqueName: \"kubernetes.io/projected/d8d3eec1-763e-4874-b2af-19401e383fed-kube-api-access-ngrm7\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428742 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-combined-ca-bundle\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428774 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3eec1-763e-4874-b2af-19401e383fed-logs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428832 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data-custom\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.428856 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-public-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.429497 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d3eec1-763e-4874-b2af-19401e383fed-logs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.434270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-combined-ca-bundle\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.434882 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.436063 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-internal-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.439617 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-config-data-custom\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.440516 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d3eec1-763e-4874-b2af-19401e383fed-public-tls-certs\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.456240 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrm7\" (UniqueName: \"kubernetes.io/projected/d8d3eec1-763e-4874-b2af-19401e383fed-kube-api-access-ngrm7\") pod \"barbican-api-794d859fd8-fbbnx\" (UID: \"d8d3eec1-763e-4874-b2af-19401e383fed\") " pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.536932 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.588578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" event={"ID":"d15f2ab3-202d-4241-a636-4d00475874aa","Type":"ContainerStarted","Data":"8439e696b2fd32672982849d8b37ce88e9906b06a73598f166488f6da611b3af"} Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.588921 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.613860 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" podStartSLOduration=4.613825201 podStartE2EDuration="4.613825201s" podCreationTimestamp="2026-02-18 11:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:51.609217414 +0000 UTC m=+1213.336921494" watchObservedRunningTime="2026-02-18 11:56:51.613825201 +0000 UTC m=+1213.341529291" Feb 18 11:56:51 crc kubenswrapper[4922]: I0218 11:56:51.700243 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.550499 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.600685 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.600858 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.600936 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h8nh\" (UniqueName: \"kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.600999 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.601027 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.601164 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data\") pod \"d7852f85-b8c5-458e-901c-3659c5ed2713\" (UID: \"d7852f85-b8c5-458e-901c-3659c5ed2713\") " Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.605845 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.617193 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts" (OuterVolumeSpecName: "scripts") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.622869 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.626703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh" (OuterVolumeSpecName: "kube-api-access-4h8nh") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "kube-api-access-4h8nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.630035 4922 generic.go:334] "Generic (PLEG): container finished" podID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" containerID="f6b3d065bfd75b2cfcd638eb81caf2052d3e13f6ce152807bf20c4d48f24622e" exitCode=0 Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.630161 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-st9pz" event={"ID":"855fb3ec-e473-4a99-a94f-cc96dda6d9c4","Type":"ContainerDied","Data":"f6b3d065bfd75b2cfcd638eb81caf2052d3e13f6ce152807bf20c4d48f24622e"} Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.635417 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rvpx7" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.635652 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rvpx7" event={"ID":"d7852f85-b8c5-458e-901c-3659c5ed2713","Type":"ContainerDied","Data":"b5010985da36e7523bd0bc3fdfdcc8c443c58a1cc63438b6b3af9b7f64ca52d5"} Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.635676 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5010985da36e7523bd0bc3fdfdcc8c443c58a1cc63438b6b3af9b7f64ca52d5" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.698908 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data" (OuterVolumeSpecName: "config-data") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.707013 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.707061 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h8nh\" (UniqueName: \"kubernetes.io/projected/d7852f85-b8c5-458e-901c-3659c5ed2713-kube-api-access-4h8nh\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.707078 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.707104 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.707118 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7852f85-b8c5-458e-901c-3659c5ed2713-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.725405 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7852f85-b8c5-458e-901c-3659c5ed2713" (UID: "d7852f85-b8c5-458e-901c-3659c5ed2713"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.809032 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7852f85-b8c5-458e-901c-3659c5ed2713-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:52 crc kubenswrapper[4922]: I0218 11:56:52.964450 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-794d859fd8-fbbnx"] Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.234560 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 11:56:53 crc kubenswrapper[4922]: E0218 11:56:53.234973 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" containerName="cinder-db-sync" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.234992 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" containerName="cinder-db-sync" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.235156 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" containerName="cinder-db-sync" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.235802 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.239185 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5hxjt" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.239449 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.239617 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.263324 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.322818 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.322914 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.322957 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gp9\" (UniqueName: \"kubernetes.io/projected/245b1cb9-d98f-4875-adf6-ab887f76849d-kube-api-access-t7gp9\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.322976 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.424911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.424985 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gp9\" (UniqueName: \"kubernetes.io/projected/245b1cb9-d98f-4875-adf6-ab887f76849d-kube-api-access-t7gp9\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.425005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.425192 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.426236 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.429787 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-openstack-config-secret\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.431600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b1cb9-d98f-4875-adf6-ab887f76849d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.444077 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gp9\" (UniqueName: \"kubernetes.io/projected/245b1cb9-d98f-4875-adf6-ab887f76849d-kube-api-access-t7gp9\") pod \"openstackclient\" (UID: \"245b1cb9-d98f-4875-adf6-ab887f76849d\") " pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.568053 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.666031 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794d859fd8-fbbnx" event={"ID":"d8d3eec1-763e-4874-b2af-19401e383fed","Type":"ContainerStarted","Data":"8a38b5b6376e30dcea7ce225715f54f7135ad4fc04a4edab5936124dc4064df7"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.666084 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794d859fd8-fbbnx" event={"ID":"d8d3eec1-763e-4874-b2af-19401e383fed","Type":"ContainerStarted","Data":"1a28858de0c713418f4f9ee6545f7adadbfc9acbc4e0e4d9bc3221e41de85448"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.666100 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794d859fd8-fbbnx" event={"ID":"d8d3eec1-763e-4874-b2af-19401e383fed","Type":"ContainerStarted","Data":"21e3cc36af7c6097e3f661f26114d5be79ab01892f5cbbbbd05ee60714014159"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.667288 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.667326 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.674269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bd4cb85-2ggtc" event={"ID":"93be7893-0b89-4762-870d-f5878ecddb3b","Type":"ContainerStarted","Data":"fae49d17506120b2a78bcfca5e18c4d6fd434cdafa79a896bc1f733849d9800b"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.674313 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bd4cb85-2ggtc" event={"ID":"93be7893-0b89-4762-870d-f5878ecddb3b","Type":"ContainerStarted","Data":"79b1f242c2b5b0800d2c146fe57c62199ca37df35738ebc40daa8f9fac0612d4"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.683236 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerStarted","Data":"71670bed8cf5414a16d8e3139affef7f174a018a3fa1f96f685382e7b236b20a"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.683287 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerStarted","Data":"ba51c32817af83f75090d7c0f3e6b11f618358a906fe3fa4abfa17b358890f4c"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.688064 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" event={"ID":"2664c9b6-f62a-4453-8771-8c273f5f9ec1","Type":"ContainerStarted","Data":"756d7e01bc6c98bf139561d99c15433814b74887d986df3ef8da0eb944781525"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.688120 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" event={"ID":"2664c9b6-f62a-4453-8771-8c273f5f9ec1","Type":"ContainerStarted","Data":"af357bdbc40a1eae94c5dcecdd2fd47279eb9382c9a8f5047d146d2b14e62c89"} Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.712743 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-794d859fd8-fbbnx" podStartSLOduration=2.712722596 podStartE2EDuration="2.712722596s" podCreationTimestamp="2026-02-18 11:56:51 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:56:53.697543612 +0000 UTC m=+1215.425247712" watchObservedRunningTime="2026-02-18 11:56:53.712722596 +0000 UTC m=+1215.440426676" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.782110 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-676bd4cb85-2ggtc" podStartSLOduration=2.850595521 podStartE2EDuration="6.782089169s" podCreationTimestamp="2026-02-18 11:56:47 +0000 UTC" firstStartedPulling="2026-02-18 11:56:48.48712123 +0000 UTC m=+1210.214825310" lastFinishedPulling="2026-02-18 11:56:52.418614878 +0000 UTC m=+1214.146318958" observedRunningTime="2026-02-18 11:56:53.729043258 +0000 UTC m=+1215.456747338" watchObservedRunningTime="2026-02-18 11:56:53.782089169 +0000 UTC m=+1215.509793249" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.888335 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-78995b5fcd-pmbbf" podStartSLOduration=3.243567477 podStartE2EDuration="6.888301295s" podCreationTimestamp="2026-02-18 11:56:47 +0000 UTC" firstStartedPulling="2026-02-18 11:56:48.773221563 +0000 UTC m=+1210.500925643" lastFinishedPulling="2026-02-18 11:56:52.417955391 +0000 UTC m=+1214.145659461" observedRunningTime="2026-02-18 11:56:53.793712523 +0000 UTC m=+1215.521416603" watchObservedRunningTime="2026-02-18 11:56:53.888301295 +0000 UTC m=+1215.616005375" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.898326 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.907107 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.918037 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xpg9l" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.919035 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.919220 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.933535 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956512 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r77f\" (UniqueName: \"kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956594 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956637 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956655 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.956747 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:53 crc kubenswrapper[4922]: I0218 11:56:53.961446 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.020563 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.065596 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.065659 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.065960 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.066146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.066208 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r77f\" (UniqueName: \"kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.066243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.069824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.077478 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.079780 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.080783 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.094335 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.099080 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.099217 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.102403 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r77f\" (UniqueName: \"kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f\") pod \"cinder-scheduler-0\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.123472 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.161860 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.164508 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.167758 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.167817 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.167887 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.167908 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.167992 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.168059 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgkxw\" (UniqueName: \"kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.168546 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.233569 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.265379 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270746 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270796 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqmb\" (UniqueName: \"kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270827 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270877 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270919 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270938 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270958 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270977 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.270998 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.271043 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.271068 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgkxw\" (UniqueName: \"kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.272600 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.272659 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.273094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.273311 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.273506 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.297401 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgkxw\" (UniqueName: 
\"kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw\") pod \"dnsmasq-dns-dc6887bc5-xfpt6\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.374020 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqmb\" (UniqueName: \"kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.375695 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.375867 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.375950 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.376189 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.376349 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.376962 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.378917 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.379449 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.384977 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.394005 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.394573 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.408048 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.441781 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqmb\" (UniqueName: \"kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb\") pod \"cinder-api-0\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " pod="openstack/cinder-api-0" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.553891 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.644446 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.698426 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="dnsmasq-dns" containerID="cri-o://8439e696b2fd32672982849d8b37ce88e9906b06a73598f166488f6da611b3af" gracePeriod=10 Feb 18 11:56:54 crc kubenswrapper[4922]: I0218 11:56:54.716535 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.018013 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-st9pz" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.136735 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmckq\" (UniqueName: \"kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq\") pod \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.137133 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data\") pod \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.137250 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle\") pod \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.137350 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data\") pod \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\" (UID: \"855fb3ec-e473-4a99-a94f-cc96dda6d9c4\") " Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.164059 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.212513 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "855fb3ec-e473-4a99-a94f-cc96dda6d9c4" (UID: "855fb3ec-e473-4a99-a94f-cc96dda6d9c4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:55 crc kubenswrapper[4922]: W0218 11:56:55.221650 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74a83ecb_de31_4767_a178_bccf8a37e93e.slice/crio-94eaad4dfdfd79b9d6e5320a8b3266331430b86c446a5d0f1d18cea2bf387427 WatchSource:0}: Error finding container 94eaad4dfdfd79b9d6e5320a8b3266331430b86c446a5d0f1d18cea2bf387427: Status 404 returned error can't find the container with id 94eaad4dfdfd79b9d6e5320a8b3266331430b86c446a5d0f1d18cea2bf387427 Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.227504 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq" (OuterVolumeSpecName: "kube-api-access-zmckq") pod "855fb3ec-e473-4a99-a94f-cc96dda6d9c4" (UID: "855fb3ec-e473-4a99-a94f-cc96dda6d9c4"). InnerVolumeSpecName "kube-api-access-zmckq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.239418 4922 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.239449 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmckq\" (UniqueName: \"kubernetes.io/projected/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-kube-api-access-zmckq\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.322072 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "855fb3ec-e473-4a99-a94f-cc96dda6d9c4" (UID: "855fb3ec-e473-4a99-a94f-cc96dda6d9c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.344648 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.385349 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data" (OuterVolumeSpecName: "config-data") pod "855fb3ec-e473-4a99-a94f-cc96dda6d9c4" (UID: "855fb3ec-e473-4a99-a94f-cc96dda6d9c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.446936 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855fb3ec-e473-4a99-a94f-cc96dda6d9c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.692836 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.748327 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerStarted","Data":"da256f656dc5f88aa583a65920c90cb68f469934b1ee4eea56bd44df48dcdf88"} Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.759788 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerStarted","Data":"94eaad4dfdfd79b9d6e5320a8b3266331430b86c446a5d0f1d18cea2bf387427"} Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.776121 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-st9pz" event={"ID":"855fb3ec-e473-4a99-a94f-cc96dda6d9c4","Type":"ContainerDied","Data":"bb03e387f02f6078ba9ca11f5028b069ffe62c115543a3d26dcd8e4428a02edd"} Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.776163 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb03e387f02f6078ba9ca11f5028b069ffe62c115543a3d26dcd8e4428a02edd" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.776226 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-st9pz" Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.796734 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"245b1cb9-d98f-4875-adf6-ab887f76849d","Type":"ContainerStarted","Data":"4e8f10c7492476510baebc3943dff72a9dc028154bb64b93d0af81c6c13b9994"} Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.905632 4922 generic.go:334] "Generic (PLEG): container finished" podID="d15f2ab3-202d-4241-a636-4d00475874aa" containerID="8439e696b2fd32672982849d8b37ce88e9906b06a73598f166488f6da611b3af" exitCode=0 Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.906802 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" event={"ID":"d15f2ab3-202d-4241-a636-4d00475874aa","Type":"ContainerDied","Data":"8439e696b2fd32672982849d8b37ce88e9906b06a73598f166488f6da611b3af"} Feb 18 11:56:55 crc kubenswrapper[4922]: I0218 11:56:55.958697 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.015581 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.176959 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.177135 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.177191 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.177278 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.177393 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.177683 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ptt\" (UniqueName: \"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt\") pod \"d15f2ab3-202d-4241-a636-4d00475874aa\" (UID: \"d15f2ab3-202d-4241-a636-4d00475874aa\") " Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.245166 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt" (OuterVolumeSpecName: "kube-api-access-b8ptt") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "kube-api-access-b8ptt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.284464 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ptt\" (UniqueName: \"kubernetes.io/projected/d15f2ab3-202d-4241-a636-4d00475874aa-kube-api-access-b8ptt\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.327156 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.370102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.387437 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.388683 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.388715 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.389279 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.394740 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config" (OuterVolumeSpecName: "config") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.447631 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d15f2ab3-202d-4241-a636-4d00475874aa" (UID: "d15f2ab3-202d-4241-a636-4d00475874aa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.492929 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.493201 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.493292 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d15f2ab3-202d-4241-a636-4d00475874aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.705697 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.741510 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.745760 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.815831 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:56:56 crc kubenswrapper[4922]: E0218 11:56:56.816438 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" containerName="glance-db-sync" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.816459 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" containerName="glance-db-sync" Feb 18 11:56:56 crc kubenswrapper[4922]: E0218 11:56:56.816484 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="init" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.816492 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="init" Feb 18 11:56:56 crc kubenswrapper[4922]: E0218 11:56:56.816507 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="dnsmasq-dns" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.816515 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="dnsmasq-dns" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.821647 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" containerName="dnsmasq-dns" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.821719 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" containerName="glance-db-sync" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.823457 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.825838 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.835099 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.860478 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.917864 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.917992 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.918093 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.918133 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.918264 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2htl\" (UniqueName: \"kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.918414 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.956791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" event={"ID":"c5a7cb59-a6a3-4653-a63a-5942277f6663","Type":"ContainerStarted","Data":"1944f842e23fb615957951883e249b9c3deafa98b8f8ab3f95ea984230f6d29c"} Feb 18 11:56:56 crc kubenswrapper[4922]: I0218 11:56:56.971063 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerStarted","Data":"80ecfe632f1eb5d18cb8c8fa491a890fbee92c07e3b873e227a8dfcd9bb02d67"} Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.020013 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.032848 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.032988 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.033221 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.033432 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.033501 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.033715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2htl\" (UniqueName: \"kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.045384 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.046002 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.046609 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.047189 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.054805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.098128 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.112323 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bdb97b9f9-22mm8" event={"ID":"d15f2ab3-202d-4241-a636-4d00475874aa","Type":"ContainerDied","Data":"b70a4c113ae5df77985adf93070afceb0e7e0972f2d420cb43d3ae8ed3526536"} Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.112445 4922 scope.go:117] "RemoveContainer" containerID="8439e696b2fd32672982849d8b37ce88e9906b06a73598f166488f6da611b3af" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.140983 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.196346 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.216181 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2htl\" (UniqueName: \"kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl\") pod \"dnsmasq-dns-5cc8b5d5c5-jrg5v\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.249668 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.250665 4922 scope.go:117] "RemoveContainer" containerID="007affd267df0eaaea70525abb6cf65b3773291056b908124aaf8cd367384660" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.272133 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.272801 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bdb97b9f9-22mm8"] Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.451425 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.453421 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.460107 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.460417 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jr8f4" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.460547 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.463515 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610068 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610718 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610818 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610854 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.610908 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhgh\" (UniqueName: \"kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.611056 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " 
pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.713951 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714003 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714032 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhgh\" (UniqueName: \"kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714103 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714152 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714212 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.714838 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.721548 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.726610 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.727302 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.731494 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.733631 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.760703 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhgh\" (UniqueName: \"kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.775599 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " pod="openstack/glance-default-external-api-0" Feb 18 11:56:57 crc kubenswrapper[4922]: I0218 11:56:57.886022 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.004101 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.005991 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.013144 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.047447 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.066774 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.082732 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerStarted","Data":"84badbb54b04871d4102bc412504ae7f5911b23870f7e7ded6df793d7986d02b"} Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.083162 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.094770 4922 generic.go:334] "Generic (PLEG): container finished" podID="c5a7cb59-a6a3-4653-a63a-5942277f6663" containerID="828b6208153495b0d3b9a186701d25e980bb083d35eb836c7903452c2dfbbadf" exitCode=0 Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.095121 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" event={"ID":"c5a7cb59-a6a3-4653-a63a-5942277f6663","Type":"ContainerDied","Data":"828b6208153495b0d3b9a186701d25e980bb083d35eb836c7903452c2dfbbadf"} Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.111494 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.703494765 podStartE2EDuration="10.111470397s" podCreationTimestamp="2026-02-18 11:56:48 +0000 UTC" firstStartedPulling="2026-02-18 11:56:50.164631081 +0000 UTC m=+1211.892335161" lastFinishedPulling="2026-02-18 11:56:57.572606713 +0000 UTC m=+1219.300310793" observedRunningTime="2026-02-18 11:56:58.109152338 +0000 UTC m=+1219.836856428" watchObservedRunningTime="2026-02-18 11:56:58.111470397 +0000 UTC m=+1219.839174477" Feb 18 11:56:58 crc kubenswrapper[4922]: W0218 11:56:58.113083 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd61bee1b_0ee4_4c97_8d5d_8655406f124c.slice/crio-18a7a7ab6ce87177724ccea1ae9a737d84614c82a96458fedc35807aee0f2e52 WatchSource:0}: Error finding container 18a7a7ab6ce87177724ccea1ae9a737d84614c82a96458fedc35807aee0f2e52: Status 404 returned error can't find the container with id 18a7a7ab6ce87177724ccea1ae9a737d84614c82a96458fedc35807aee0f2e52 Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.122879 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerStarted","Data":"1c2064f4a7cb5c8c928c435089f458b706b8fa933a0140eb0fd478bbb13155df"} Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124100 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124130 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124147 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124214 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124248 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2trdb\" (UniqueName: \"kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124271 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.124320 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.225690 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226751 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226777 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226798 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226853 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226915 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2trdb\" (UniqueName: \"kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.226946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.229206 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.231712 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.232322 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.236574 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.236715 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.266179 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.275254 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2trdb\" (UniqueName: \"kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.359834 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.364214 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.389874 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 11:56:58 crc kubenswrapper[4922]: I0218 11:56:58.870961 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:56:58 crc kubenswrapper[4922]: W0218 11:56:58.880959 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ad78d97_6a95_4bd9_9204_6f8d0af71cf3.slice/crio-f239b779f8f04023b896a330788159d833d8acf6105f2514edc0ee6188559dff WatchSource:0}: Error finding container f239b779f8f04023b896a330788159d833d8acf6105f2514edc0ee6188559dff: Status 404 returned error can't find the container with id f239b779f8f04023b896a330788159d833d8acf6105f2514edc0ee6188559dff Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.084427 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15f2ab3-202d-4241-a636-4d00475874aa" path="/var/lib/kubelet/pods/d15f2ab3-202d-4241-a636-4d00475874aa/volumes" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.116315 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.158079 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.160196 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.161011 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgkxw\" (UniqueName: \"kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.161057 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.161100 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.161132 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb\") pod \"c5a7cb59-a6a3-4653-a63a-5942277f6663\" (UID: \"c5a7cb59-a6a3-4653-a63a-5942277f6663\") " Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.190076 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerStarted","Data":"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61"} Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.190142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerStarted","Data":"18a7a7ab6ce87177724ccea1ae9a737d84614c82a96458fedc35807aee0f2e52"} Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.196743 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw" (OuterVolumeSpecName: "kube-api-access-rgkxw") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "kube-api-access-rgkxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.197912 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerStarted","Data":"d5a387d1fd841c5e7a8b16f873f082fef6f77bf7e19af0404c6f60662142d1da"} Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.202244 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.208064 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.209057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" event={"ID":"c5a7cb59-a6a3-4653-a63a-5942277f6663","Type":"ContainerDied","Data":"1944f842e23fb615957951883e249b9c3deafa98b8f8ab3f95ea984230f6d29c"} Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.209100 4922 scope.go:117] "RemoveContainer" containerID="828b6208153495b0d3b9a186701d25e980bb083d35eb836c7903452c2dfbbadf" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.209219 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc6887bc5-xfpt6" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.226725 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config" (OuterVolumeSpecName: "config") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.226811 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerStarted","Data":"f239b779f8f04023b896a330788159d833d8acf6105f2514edc0ee6188559dff"} Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.236184 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.248835 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5a7cb59-a6a3-4653-a63a-5942277f6663" (UID: "c5a7cb59-a6a3-4653-a63a-5942277f6663"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265143 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265177 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265187 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265197 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgkxw\" (UniqueName: \"kubernetes.io/projected/c5a7cb59-a6a3-4653-a63a-5942277f6663-kube-api-access-rgkxw\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265208 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.265219 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5a7cb59-a6a3-4653-a63a-5942277f6663-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.321453 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.362823 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.405622 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.718463 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:56:59 crc kubenswrapper[4922]: I0218 11:56:59.719637 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc6887bc5-xfpt6"] Feb 18 11:57:00 crc kubenswrapper[4922]: I0218 11:57:00.242864 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerStarted","Data":"a2ee0d21eb104489d672ffe18ba532d76b875730a7c6807acc779f5e1a1423e5"} Feb 18 11:57:00 crc kubenswrapper[4922]: I0218 11:57:00.245561 4922 generic.go:334] "Generic (PLEG): container finished" podID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerID="ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61" exitCode=0 Feb 18 11:57:00 crc kubenswrapper[4922]: 
I0218 11:57:00.245617 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerDied","Data":"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.003983 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a7cb59-a6a3-4653-a63a-5942277f6663" path="/var/lib/kubelet/pods/c5a7cb59-a6a3-4653-a63a-5942277f6663/volumes" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.295116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerStarted","Data":"fc4b0d98b4a0ffc17082b832ce25867774e01fb1582211f0e74f81571c524a12"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.300887 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerStarted","Data":"5384cbf5a529a7ac8c08ebb121d6b3f3cb726547d83ae663fede88079514676b"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.306517 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerStarted","Data":"b4ae6c8859c170ed82fdd4029620545643f83a046da270a4665b5e08aebee3d6"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.308468 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerStarted","Data":"08eea4a1ca654ec0f46724358fe53403626c28895fe7bd0802a7388b3e60a117"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.308659 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api-log" containerID="cri-o://1c2064f4a7cb5c8c928c435089f458b706b8fa933a0140eb0fd478bbb13155df" gracePeriod=30 Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.308968 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.309007 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api" containerID="cri-o://08eea4a1ca654ec0f46724358fe53403626c28895fe7bd0802a7388b3e60a117" gracePeriod=30 Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.312217 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerStarted","Data":"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674"} Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.312973 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.324992 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.159958047 podStartE2EDuration="8.324976322s" podCreationTimestamp="2026-02-18 11:56:53 +0000 UTC" firstStartedPulling="2026-02-18 11:56:55.230696774 +0000 UTC m=+1216.958400854" lastFinishedPulling="2026-02-18 11:56:56.395715059 +0000 UTC m=+1218.123419129" 
observedRunningTime="2026-02-18 11:57:01.320963561 +0000 UTC m=+1223.048667641" watchObservedRunningTime="2026-02-18 11:57:01.324976322 +0000 UTC m=+1223.052680402" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.359409 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.359390012 podStartE2EDuration="8.359390012s" podCreationTimestamp="2026-02-18 11:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:01.34584223 +0000 UTC m=+1223.073546310" watchObservedRunningTime="2026-02-18 11:57:01.359390012 +0000 UTC m=+1223.087094092" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.383118 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" podStartSLOduration=5.383098512 podStartE2EDuration="5.383098512s" podCreationTimestamp="2026-02-18 11:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:01.380808444 +0000 UTC m=+1223.108512524" watchObservedRunningTime="2026-02-18 11:57:01.383098512 +0000 UTC m=+1223.110802592" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.483168 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.513701 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:57:01 crc kubenswrapper[4922]: I0218 11:57:01.560187 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.035134 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.363110 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerStarted","Data":"d3c510b21747ecd1b7af150cea8f55ebe831e769ad7c9a3529624a743396fb61"} Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.363283 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-log" containerID="cri-o://b4ae6c8859c170ed82fdd4029620545643f83a046da270a4665b5e08aebee3d6" gracePeriod=30 Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.363496 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-httpd" containerID="cri-o://d3c510b21747ecd1b7af150cea8f55ebe831e769ad7c9a3529624a743396fb61" gracePeriod=30 Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.380781 4922 generic.go:334] "Generic (PLEG): container finished" podID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerID="08eea4a1ca654ec0f46724358fe53403626c28895fe7bd0802a7388b3e60a117" exitCode=0 Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.380806 4922 generic.go:334] "Generic (PLEG): container finished" podID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerID="1c2064f4a7cb5c8c928c435089f458b706b8fa933a0140eb0fd478bbb13155df" exitCode=143 Feb 18 11:57:02 crc 
kubenswrapper[4922]: I0218 11:57:02.380849 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerDied","Data":"08eea4a1ca654ec0f46724358fe53403626c28895fe7bd0802a7388b3e60a117"} Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.380874 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerDied","Data":"1c2064f4a7cb5c8c928c435089f458b706b8fa933a0140eb0fd478bbb13155df"} Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.380883 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ed1bc0f3-a613-4565-b1f5-e962556acb00","Type":"ContainerDied","Data":"80ecfe632f1eb5d18cb8c8fa491a890fbee92c07e3b873e227a8dfcd9bb02d67"} Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.380895 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ecfe632f1eb5d18cb8c8fa491a890fbee92c07e3b873e227a8dfcd9bb02d67" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.384640 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-log" containerID="cri-o://5384cbf5a529a7ac8c08ebb121d6b3f3cb726547d83ae663fede88079514676b" gracePeriod=30 Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.384904 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerStarted","Data":"f6099997ca87e1e2e208c851a429f437033093112ae306d8aeb490803c1d2917"} Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.385145 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-httpd" containerID="cri-o://f6099997ca87e1e2e208c851a429f437033093112ae306d8aeb490803c1d2917" gracePeriod=30 Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.393854 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.393829986 podStartE2EDuration="6.393829986s" podCreationTimestamp="2026-02-18 11:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:02.385390423 +0000 UTC m=+1224.113094503" watchObservedRunningTime="2026-02-18 11:57:02.393829986 +0000 UTC m=+1224.121534066" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.435937 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.460843 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.46082124 podStartE2EDuration="6.46082124s" podCreationTimestamp="2026-02-18 11:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:02.427345863 +0000 UTC m=+1224.155049953" watchObservedRunningTime="2026-02-18 11:57:02.46082124 +0000 UTC m=+1224.188525320" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616061 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616444 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tqmb\" (UniqueName: \"kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616475 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616561 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616641 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616674 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.616784 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data\") pod \"ed1bc0f3-a613-4565-b1f5-e962556acb00\" (UID: \"ed1bc0f3-a613-4565-b1f5-e962556acb00\") " Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.618866 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs" (OuterVolumeSpecName: "logs") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.621708 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.637772 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb" (OuterVolumeSpecName: "kube-api-access-2tqmb") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "kube-api-access-2tqmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.638428 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts" (OuterVolumeSpecName: "scripts") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.646540 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.729564 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730420 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed1bc0f3-a613-4565-b1f5-e962556acb00-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730496 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730511 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730525 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730538 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tqmb\" (UniqueName: \"kubernetes.io/projected/ed1bc0f3-a613-4565-b1f5-e962556acb00-kube-api-access-2tqmb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.730551 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ed1bc0f3-a613-4565-b1f5-e962556acb00-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.796634 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data" (OuterVolumeSpecName: "config-data") pod "ed1bc0f3-a613-4565-b1f5-e962556acb00" (UID: "ed1bc0f3-a613-4565-b1f5-e962556acb00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:02 crc kubenswrapper[4922]: I0218 11:57:02.835716 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed1bc0f3-a613-4565-b1f5-e962556acb00-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.462248 4922 generic.go:334] "Generic (PLEG): container finished" podID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerID="f6099997ca87e1e2e208c851a429f437033093112ae306d8aeb490803c1d2917" exitCode=0 Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.462637 4922 generic.go:334] "Generic (PLEG): container finished" podID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerID="5384cbf5a529a7ac8c08ebb121d6b3f3cb726547d83ae663fede88079514676b" exitCode=143 Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.462496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerDied","Data":"f6099997ca87e1e2e208c851a429f437033093112ae306d8aeb490803c1d2917"} Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.462750 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerDied","Data":"5384cbf5a529a7ac8c08ebb121d6b3f3cb726547d83ae663fede88079514676b"} Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.478984 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerID="b4ae6c8859c170ed82fdd4029620545643f83a046da270a4665b5e08aebee3d6" exitCode=143 Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.479139 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.480338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerDied","Data":"b4ae6c8859c170ed82fdd4029620545643f83a046da270a4665b5e08aebee3d6"} Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.519221 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.554606 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.602448 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:57:03 crc kubenswrapper[4922]: E0218 11:57:03.603093 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api-log" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603113 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api-log" Feb 18 11:57:03 crc kubenswrapper[4922]: E0218 11:57:03.603139 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a7cb59-a6a3-4653-a63a-5942277f6663" containerName="init" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603162 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a7cb59-a6a3-4653-a63a-5942277f6663" containerName="init" Feb 18 11:57:03 crc kubenswrapper[4922]: E0218 11:57:03.603201 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603211 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603511 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a7cb59-a6a3-4653-a63a-5942277f6663" containerName="init" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603539 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api-log" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.603549 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" containerName="cinder-api" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.605039 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.615777 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.616075 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.638099 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.718461 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766589 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766672 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gqn\" (UniqueName: \"kubernetes.io/projected/b897159b-9178-4f59-b254-08229460867d-kube-api-access-v4gqn\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766713 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b897159b-9178-4f59-b254-08229460867d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766729 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-scripts\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766790 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766823 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b897159b-9178-4f59-b254-08229460867d-logs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766857 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766882 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.766898 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.868799 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.869801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b897159b-9178-4f59-b254-08229460867d-logs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.872314 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.870656 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b897159b-9178-4f59-b254-08229460867d-logs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.873124 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.873561 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.874047 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.874172 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4gqn\" (UniqueName: \"kubernetes.io/projected/b897159b-9178-4f59-b254-08229460867d-kube-api-access-v4gqn\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.874299 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/b897159b-9178-4f59-b254-08229460867d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.874403 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-scripts\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.874883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b897159b-9178-4f59-b254-08229460867d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.878402 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.879037 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.890974 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-scripts\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.891339 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.892280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-config-data\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.907003 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4gqn\" (UniqueName: \"kubernetes.io/projected/b897159b-9178-4f59-b254-08229460867d-kube-api-access-v4gqn\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:03 crc kubenswrapper[4922]: I0218 11:57:03.916299 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b897159b-9178-4f59-b254-08229460867d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b897159b-9178-4f59-b254-08229460867d\") " pod="openstack/cinder-api-0" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.028917 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.140634 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6bb9876df9-jt7kg"] Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.142674 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.146924 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.147356 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.147607 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.153424 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bb9876df9-jt7kg"] Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.266730 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.278122 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.175:8080/\": dial tcp 10.217.0.175:8080: connect: connection refused" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296189 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tp5k\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-kube-api-access-5tp5k\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296762 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-etc-swift\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296794 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-log-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296820 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-internal-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296843 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-config-data\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: 
\"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296940 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-combined-ca-bundle\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.296989 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-public-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.297037 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-run-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.347636 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.398806 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-combined-ca-bundle\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.398897 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-public-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.398952 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-run-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.398980 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tp5k\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-kube-api-access-5tp5k\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.399022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-etc-swift\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.399041 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-log-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.399062 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-internal-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.399077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-config-data\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.401669 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-run-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.405060 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-log-httpd\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.414741 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-combined-ca-bundle\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.417899 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-public-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.419881 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-config-data\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.420561 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tp5k\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-kube-api-access-5tp5k\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.422835 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-internal-tls-certs\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.424519 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8cc5cf6d-c722-42a3-8389-b991e77d1bbf-etc-swift\") pod \"swift-proxy-6bb9876df9-jt7kg\" (UID: \"8cc5cf6d-c722-42a3-8389-b991e77d1bbf\") " pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.504083 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.505245 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.505400 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2trdb\" (UniqueName: \"kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.505543 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.505775 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.505925 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.506047 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle\") pod \"16d2c710-0adb-4543-8e7c-7e318d2e0091\" (UID: \"16d2c710-0adb-4543-8e7c-7e318d2e0091\") " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.515773 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb" (OuterVolumeSpecName: "kube-api-access-2trdb") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "kube-api-access-2trdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.528616 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.540564 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"16d2c710-0adb-4543-8e7c-7e318d2e0091","Type":"ContainerDied","Data":"a2ee0d21eb104489d672ffe18ba532d76b875730a7c6807acc779f5e1a1423e5"} Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.540625 4922 scope.go:117] "RemoveContainer" containerID="f6099997ca87e1e2e208c851a429f437033093112ae306d8aeb490803c1d2917" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.540784 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.564535 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts" (OuterVolumeSpecName: "scripts") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.564697 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs" (OuterVolumeSpecName: "logs") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.564724 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.574928 4922 generic.go:334] "Generic (PLEG): container finished" podID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerID="d3c510b21747ecd1b7af150cea8f55ebe831e769ad7c9a3529624a743396fb61" exitCode=0 Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.574977 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerDied","Data":"d3c510b21747ecd1b7af150cea8f55ebe831e769ad7c9a3529624a743396fb61"} Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.590705 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.613653 4922 scope.go:117] "RemoveContainer" containerID="5384cbf5a529a7ac8c08ebb121d6b3f3cb726547d83ae663fede88079514676b" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615040 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615074 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615084 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2trdb\" (UniqueName: \"kubernetes.io/projected/16d2c710-0adb-4543-8e7c-7e318d2e0091-kube-api-access-2trdb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615096 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615105 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16d2c710-0adb-4543-8e7c-7e318d2e0091-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.615112 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.643716 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data" (OuterVolumeSpecName: "config-data") pod "16d2c710-0adb-4543-8e7c-7e318d2e0091" (UID: "16d2c710-0adb-4543-8e7c-7e318d2e0091"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.654218 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.667851 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.717452 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.717507 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16d2c710-0adb-4543-8e7c-7e318d2e0091-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.743089 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 11:57:04 crc kubenswrapper[4922]: W0218 11:57:04.760566 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb897159b_9178_4f59_b254_08229460867d.slice/crio-66ec14c893f5abca1bc83a4e927703007a05fa08b70365f0103fc74dc81ffc65 WatchSource:0}: Error finding container 66ec14c893f5abca1bc83a4e927703007a05fa08b70365f0103fc74dc81ffc65: Status 404 returned error can't find the container with id 66ec14c893f5abca1bc83a4e927703007a05fa08b70365f0103fc74dc81ffc65 Feb 18 11:57:04 crc kubenswrapper[4922]: I0218 11:57:04.850728 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.030546 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1bc0f3-a613-4565-b1f5-e962556acb00" path="/var/lib/kubelet/pods/ed1bc0f3-a613-4565-b1f5-e962556acb00/volumes" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.031934 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.039484 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.078217 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:05 crc kubenswrapper[4922]: E0218 11:57:05.078785 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-log" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.078804 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-log" Feb 18 11:57:05 crc kubenswrapper[4922]: E0218 11:57:05.078823 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-httpd" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.079911 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-httpd" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.081078 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-httpd" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.081121 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" containerName="glance-log" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.083648 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.085861 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.086216 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.095061 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.142956 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-794d859fd8-fbbnx" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.238837 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.239177 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" containerID="cri-o://95063da6319615da44d0b1f12dd9abb6a109354894bb0cb11f9c25d007446b4c" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.239843 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" containerID="cri-o://d307cc88ce59602f69621bcf0371da0ee28a8272ec22fc60147af239fd78badf" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.250910 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.252717 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253311 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253458 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253489 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 
11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253527 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253660 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253714 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253755 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcrkn\" (UniqueName: \"kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.253893 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.353509 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bb9876df9-jt7kg"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.355968 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356120 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356182 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") 
" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356209 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356256 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcrkn\" (UniqueName: \"kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356339 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356412 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.356440 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.358281 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.358728 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.360388 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.370590 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc 
kubenswrapper[4922]: I0218 11:57:05.372538 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.375238 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.389996 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.402023 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcrkn\" (UniqueName: \"kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.428558 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.435263 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.639350 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.640719 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-central-agent" containerID="cri-o://ba51c32817af83f75090d7c0f3e6b11f618358a906fe3fa4abfa17b358890f4c" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.641034 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="proxy-httpd" containerID="cri-o://84badbb54b04871d4102bc412504ae7f5911b23870f7e7ded6df793d7986d02b" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.641307 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-notification-agent" containerID="cri-o://71670bed8cf5414a16d8e3139affef7f174a018a3fa1f96f685382e7b236b20a" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.641424 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="sg-core" containerID="cri-o://da256f656dc5f88aa583a65920c90cb68f469934b1ee4eea56bd44df48dcdf88" gracePeriod=30 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.730305 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b897159b-9178-4f59-b254-08229460867d","Type":"ContainerStarted","Data":"66ec14c893f5abca1bc83a4e927703007a05fa08b70365f0103fc74dc81ffc65"} Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.736177 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.741344 4922 generic.go:334] "Generic (PLEG): container finished" podID="53371b07-a65f-4fec-8564-bcd51df6c010" containerID="95063da6319615da44d0b1f12dd9abb6a109354894bb0cb11f9c25d007446b4c" exitCode=143 Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.741446 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerDied","Data":"95063da6319615da44d0b1f12dd9abb6a109354894bb0cb11f9c25d007446b4c"} Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.753706 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bb9876df9-jt7kg" event={"ID":"8cc5cf6d-c722-42a3-8389-b991e77d1bbf","Type":"ContainerStarted","Data":"ef101f250be994a7814b871182058c04992ceb0141e2aa6a68e3a91fa188bb6b"} Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810214 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810272 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbhgh\" (UniqueName: \"kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810329 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810488 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810576 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810604 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.810628 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data\") pod \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\" (UID: \"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3\") " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.813569 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs" (OuterVolumeSpecName: "logs") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.814508 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.820077 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh" (OuterVolumeSpecName: "kube-api-access-zbhgh") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "kube-api-access-zbhgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.828545 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts" (OuterVolumeSpecName: "scripts") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.829865 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.867388 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.914827 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.915790 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.915816 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.915832 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbhgh\" (UniqueName: \"kubernetes.io/projected/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-kube-api-access-zbhgh\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.915844 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.915854 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:05 crc kubenswrapper[4922]: I0218 11:57:05.994279 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.015562 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data" (OuterVolumeSpecName: "config-data") pod "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" (UID: "6ad78d97-6a95-4bd9-9204-6f8d0af71cf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.018647 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.018688 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.213143 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.794041 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerStarted","Data":"205155ab7a59a38e41604dd6477d8795045c94879e29fefe3e7383b4bc42a275"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.811183 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b897159b-9178-4f59-b254-08229460867d","Type":"ContainerStarted","Data":"5b9dff47acb202a13f5efb12c57ae21c1a29a7d2f410f0ba676d77fbf9a6c0ef"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.826232 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ad78d97-6a95-4bd9-9204-6f8d0af71cf3","Type":"ContainerDied","Data":"f239b779f8f04023b896a330788159d833d8acf6105f2514edc0ee6188559dff"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.826258 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.826399 4922 scope.go:117] "RemoveContainer" containerID="d3c510b21747ecd1b7af150cea8f55ebe831e769ad7c9a3529624a743396fb61" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.833623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bb9876df9-jt7kg" event={"ID":"8cc5cf6d-c722-42a3-8389-b991e77d1bbf","Type":"ContainerStarted","Data":"66dc8164db89d83a5273d7b5c890920638da439ac802822d3c7ad93d40a2b796"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.833668 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bb9876df9-jt7kg" event={"ID":"8cc5cf6d-c722-42a3-8389-b991e77d1bbf","Type":"ContainerStarted","Data":"7c235e5e03c0add02f18ecc6178afdc6693e386cabfb339de7a43ee043e84f2a"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.834943 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.834978 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850829 4922 generic.go:334] "Generic (PLEG): container finished" podID="31a682bb-b881-47f5-960b-c6ae54c24275" containerID="84badbb54b04871d4102bc412504ae7f5911b23870f7e7ded6df793d7986d02b" exitCode=0 Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850852 4922 generic.go:334] "Generic (PLEG): container finished" podID="31a682bb-b881-47f5-960b-c6ae54c24275" containerID="da256f656dc5f88aa583a65920c90cb68f469934b1ee4eea56bd44df48dcdf88" exitCode=2 Feb 18 11:57:06 crc 
kubenswrapper[4922]: I0218 11:57:06.850860 4922 generic.go:334] "Generic (PLEG): container finished" podID="31a682bb-b881-47f5-960b-c6ae54c24275" containerID="71670bed8cf5414a16d8e3139affef7f174a018a3fa1f96f685382e7b236b20a" exitCode=0 Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850867 4922 generic.go:334] "Generic (PLEG): container finished" podID="31a682bb-b881-47f5-960b-c6ae54c24275" containerID="ba51c32817af83f75090d7c0f3e6b11f618358a906fe3fa4abfa17b358890f4c" exitCode=0 Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850887 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerDied","Data":"84badbb54b04871d4102bc412504ae7f5911b23870f7e7ded6df793d7986d02b"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerDied","Data":"da256f656dc5f88aa583a65920c90cb68f469934b1ee4eea56bd44df48dcdf88"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850921 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerDied","Data":"71670bed8cf5414a16d8e3139affef7f174a018a3fa1f96f685382e7b236b20a"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.850929 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerDied","Data":"ba51c32817af83f75090d7c0f3e6b11f618358a906fe3fa4abfa17b358890f4c"} Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.863860 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bb9876df9-jt7kg" podStartSLOduration=2.863842079 podStartE2EDuration="2.863842079s" podCreationTimestamp="2026-02-18 11:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:06.855043177 +0000 UTC m=+1228.582747257" watchObservedRunningTime="2026-02-18 11:57:06.863842079 +0000 UTC m=+1228.591546159" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.866434 4922 scope.go:117] "RemoveContainer" containerID="b4ae6c8859c170ed82fdd4029620545643f83a046da270a4665b5e08aebee3d6" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.890302 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.911650 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.924923 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:06 crc kubenswrapper[4922]: E0218 11:57:06.925537 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-log" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.925564 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-log" Feb 18 11:57:06 crc kubenswrapper[4922]: E0218 11:57:06.925635 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-httpd" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 
11:57:06.925647 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-httpd" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.925876 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-httpd" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.925917 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" containerName="glance-log" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.927193 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.937862 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.938182 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 11:57:06 crc kubenswrapper[4922]: I0218 11:57:06.945278 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.035803 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d2c710-0adb-4543-8e7c-7e318d2e0091" path="/var/lib/kubelet/pods/16d2c710-0adb-4543-8e7c-7e318d2e0091/volumes" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.045257 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad78d97-6a95-4bd9-9204-6f8d0af71cf3" path="/var/lib/kubelet/pods/6ad78d97-6a95-4bd9-9204-6f8d0af71cf3/volumes" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055629 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055750 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055806 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7nr\" (UniqueName: \"kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055834 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055873 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.055904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.056004 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.056038 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.176846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.176968 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7nr\" (UniqueName: \"kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177005 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177065 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177100 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177286 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177358 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.177552 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.178091 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.182285 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.184097 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.184504 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.185658 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.185737 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.207890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.214690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp7nr\" (UniqueName: \"kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.239723 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.273520 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.379428 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.379700 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="dnsmasq-dns" containerID="cri-o://4fbf873b475eb4db471744da3b986195e954cf285ed89e854f16fe22b1d17ca4" gracePeriod=10 Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.437133 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.441553 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486081 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486168 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486192 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486226 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt6qk\" (UniqueName: \"kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486324 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486458 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.486565 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd\") pod \"31a682bb-b881-47f5-960b-c6ae54c24275\" (UID: \"31a682bb-b881-47f5-960b-c6ae54c24275\") " Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.487457 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.491703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.499615 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts" (OuterVolumeSpecName: "scripts") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.511695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk" (OuterVolumeSpecName: "kube-api-access-tt6qk") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "kube-api-access-tt6qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.586076 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.599247 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.599280 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.599289 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/31a682bb-b881-47f5-960b-c6ae54c24275-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.599297 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.599305 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt6qk\" (UniqueName: \"kubernetes.io/projected/31a682bb-b881-47f5-960b-c6ae54c24275-kube-api-access-tt6qk\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.789856 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.795564 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data" (OuterVolumeSpecName: "config-data") pod "31a682bb-b881-47f5-960b-c6ae54c24275" (UID: "31a682bb-b881-47f5-960b-c6ae54c24275"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.806195 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.806237 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31a682bb-b881-47f5-960b-c6ae54c24275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.884627 4922 generic.go:334] "Generic (PLEG): container finished" podID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerID="4fbf873b475eb4db471744da3b986195e954cf285ed89e854f16fe22b1d17ca4" exitCode=0 Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.885061 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" event={"ID":"24bbb94b-821e-4c8c-ae27-356f296903bf","Type":"ContainerDied","Data":"4fbf873b475eb4db471744da3b986195e954cf285ed89e854f16fe22b1d17ca4"} Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.898344 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"31a682bb-b881-47f5-960b-c6ae54c24275","Type":"ContainerDied","Data":"2f2fa694e60fe2de69033e6edac945c54110ee47bb40b70b10dec8b4f330dcd3"} Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.898429 4922 scope.go:117] "RemoveContainer" containerID="84badbb54b04871d4102bc412504ae7f5911b23870f7e7ded6df793d7986d02b" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.898665 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.909990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerStarted","Data":"2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a"} Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.920133 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b897159b-9178-4f59-b254-08229460867d","Type":"ContainerStarted","Data":"b1aff12a59b6181c85a4b4e0917df836f66c6b7e79213ee94f04444ae389d050"} Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.920712 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 11:57:07 crc kubenswrapper[4922]: I0218 11:57:07.971440 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.971414411 podStartE2EDuration="4.971414411s" podCreationTimestamp="2026-02-18 11:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:07.957730815 +0000 UTC m=+1229.685434895" watchObservedRunningTime="2026-02-18 11:57:07.971414411 +0000 UTC m=+1229.699118501" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.069874 4922 scope.go:117] "RemoveContainer" containerID="da256f656dc5f88aa583a65920c90cb68f469934b1ee4eea56bd44df48dcdf88" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.097199 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.113316 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.124549 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.187609 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188183 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="sg-core" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188211 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="sg-core" Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188236 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="dnsmasq-dns" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188242 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="dnsmasq-dns" Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188254 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="init" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188260 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="init" Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188278 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-central-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188285 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-central-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188310 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="proxy-httpd" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188316 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="proxy-httpd" Feb 18 11:57:08 crc kubenswrapper[4922]: E0218 11:57:08.188326 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-notification-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188332 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-notification-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188518 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="sg-core" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188533 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-notification-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188544 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" containerName="dnsmasq-dns" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 
11:57:08.188553 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="proxy-httpd" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.188564 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" containerName="ceilometer-central-agent" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.193239 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.204553 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.204901 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.208057 4922 scope.go:117] "RemoveContainer" containerID="71670bed8cf5414a16d8e3139affef7f174a018a3fa1f96f685382e7b236b20a" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.214843 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8kkt\" (UniqueName: \"kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223073 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223092 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223545 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223566 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.223685 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc\") pod \"24bbb94b-821e-4c8c-ae27-356f296903bf\" (UID: \"24bbb94b-821e-4c8c-ae27-356f296903bf\") " Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.246345 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt" (OuterVolumeSpecName: 
"kube-api-access-x8kkt") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "kube-api-access-x8kkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.320305 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325378 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325444 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325483 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325503 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqwwp\" (UniqueName: \"kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325549 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325653 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8kkt\" (UniqueName: \"kubernetes.io/projected/24bbb94b-821e-4c8c-ae27-356f296903bf-kube-api-access-x8kkt\") on node \"crc\" 
DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.325670 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.346164 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.392916 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.393048 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.425423 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427715 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427809 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427857 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427875 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqwwp\" (UniqueName: \"kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427922 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427945 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.427997 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.428045 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.428580 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.430794 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.441616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.446501 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.473006 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.473198 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqwwp\" (UniqueName: \"kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.479401 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.497294 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.531773 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.596089 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.622170 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.623640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config" (OuterVolumeSpecName: "config") pod "24bbb94b-821e-4c8c-ae27-356f296903bf" (UID: "24bbb94b-821e-4c8c-ae27-356f296903bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.633320 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.633355 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24bbb94b-821e-4c8c-ae27-356f296903bf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.699639 4922 scope.go:117] "RemoveContainer" containerID="ba51c32817af83f75090d7c0f3e6b11f618358a906fe3fa4abfa17b358890f4c" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.947483 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.947490 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-2ktqs" event={"ID":"24bbb94b-821e-4c8c-ae27-356f296903bf","Type":"ContainerDied","Data":"d52ba93a2e395a5fe3e842c514b38b9be3486416e78d8b85381f805174f39d21"} Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.947958 4922 scope.go:117] "RemoveContainer" containerID="4fbf873b475eb4db471744da3b986195e954cf285ed89e854f16fe22b1d17ca4" Feb 18 11:57:08 crc kubenswrapper[4922]: I0218 11:57:08.962883 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerStarted","Data":"fae0743e5e0fb73717e68120a40713dc2bcdd3fa357699afed1bff0f5a1368e5"} Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.003483 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.003459664 podStartE2EDuration="4.003459664s" podCreationTimestamp="2026-02-18 11:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:08.993741418 +0000 UTC m=+1230.721445498" watchObservedRunningTime="2026-02-18 11:57:09.003459664 +0000 UTC m=+1230.731163744" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.010569 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a682bb-b881-47f5-960b-c6ae54c24275" path="/var/lib/kubelet/pods/31a682bb-b881-47f5-960b-c6ae54c24275/volumes" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.011494 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerStarted","Data":"9937aae78bceb48bd4f47887b4b7c1fa9f743a0bf2b9a03c23a054415125619f"} Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.025146 4922 scope.go:117] "RemoveContainer" containerID="7f1abe52f752c943d157e484e47d8d790a98ef3e4904432fd3c21618fac5c1e6" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.104694 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.126958 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-2ktqs"] Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.337591 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.704556 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:49268->10.217.0.171:9311: read: connection reset by peer" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.704632 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:38238->10.217.0.171:9311: read: connection reset by peer" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.706669 4922 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/barbican-api-6d677498bd-cxq98" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.706814 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.808074 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.809070 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.893373 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 11:57:09 crc kubenswrapper[4922]: I0218 11:57:09.975928 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.001606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerStarted","Data":"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18"} Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.002663 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerStarted","Data":"e51f7f2faa63d11b52bb16edb526931063add3c924782fc45c0056ce678908a1"} Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.004407 4922 generic.go:334] "Generic (PLEG): container finished" podID="53371b07-a65f-4fec-8564-bcd51df6c010" containerID="d307cc88ce59602f69621bcf0371da0ee28a8272ec22fc60147af239fd78badf" exitCode=0 Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.004578 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="cinder-scheduler" containerID="cri-o://d5a387d1fd841c5e7a8b16f873f082fef6f77bf7e19af0404c6f60662142d1da" gracePeriod=30 Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.004799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerDied","Data":"d307cc88ce59602f69621bcf0371da0ee28a8272ec22fc60147af239fd78badf"} Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.005856 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="probe" containerID="cri-o://fc4b0d98b4a0ffc17082b832ce25867774e01fb1582211f0e74f81571c524a12" gracePeriod=30 Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.330376 4922 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.343826 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle\") pod \"53371b07-a65f-4fec-8564-bcd51df6c010\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs\") pod \"53371b07-a65f-4fec-8564-bcd51df6c010\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485470 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data\") pod \"53371b07-a65f-4fec-8564-bcd51df6c010\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485524 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom\") pod \"53371b07-a65f-4fec-8564-bcd51df6c010\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485547 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdmzm\" (UniqueName: \"kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm\") pod \"53371b07-a65f-4fec-8564-bcd51df6c010\" (UID: \"53371b07-a65f-4fec-8564-bcd51df6c010\") " Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.485914 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs" (OuterVolumeSpecName: "logs") pod "53371b07-a65f-4fec-8564-bcd51df6c010" (UID: "53371b07-a65f-4fec-8564-bcd51df6c010"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.486132 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53371b07-a65f-4fec-8564-bcd51df6c010-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.491648 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53371b07-a65f-4fec-8564-bcd51df6c010" (UID: "53371b07-a65f-4fec-8564-bcd51df6c010"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.506835 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm" (OuterVolumeSpecName: "kube-api-access-jdmzm") pod "53371b07-a65f-4fec-8564-bcd51df6c010" (UID: "53371b07-a65f-4fec-8564-bcd51df6c010"). InnerVolumeSpecName "kube-api-access-jdmzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.530774 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53371b07-a65f-4fec-8564-bcd51df6c010" (UID: "53371b07-a65f-4fec-8564-bcd51df6c010"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.588546 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.588581 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdmzm\" (UniqueName: \"kubernetes.io/projected/53371b07-a65f-4fec-8564-bcd51df6c010-kube-api-access-jdmzm\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.588596 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.595659 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data" (OuterVolumeSpecName: "config-data") pod "53371b07-a65f-4fec-8564-bcd51df6c010" (UID: "53371b07-a65f-4fec-8564-bcd51df6c010"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.690501 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53371b07-a65f-4fec-8564-bcd51df6c010-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:10 crc kubenswrapper[4922]: I0218 11:57:10.988902 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24bbb94b-821e-4c8c-ae27-356f296903bf" path="/var/lib/kubelet/pods/24bbb94b-821e-4c8c-ae27-356f296903bf/volumes" Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.017001 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d677498bd-cxq98" event={"ID":"53371b07-a65f-4fec-8564-bcd51df6c010","Type":"ContainerDied","Data":"6019911b846afdb67dccdb7e03471f7cbf11c9aa081bfb07a432273f6bc2c54b"} Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.017048 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d677498bd-cxq98" Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.017095 4922 scope.go:117] "RemoveContainer" containerID="d307cc88ce59602f69621bcf0371da0ee28a8272ec22fc60147af239fd78badf" Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.024468 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerStarted","Data":"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4"} Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.028265 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerStarted","Data":"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9"} Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.058348 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.058281606 podStartE2EDuration="5.058281606s" podCreationTimestamp="2026-02-18 11:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:11.048618072 +0000 UTC m=+1232.776322172" watchObservedRunningTime="2026-02-18 11:57:11.058281606 +0000 UTC m=+1232.785985686" Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.091977 4922 scope.go:117] "RemoveContainer" containerID="95063da6319615da44d0b1f12dd9abb6a109354894bb0cb11f9c25d007446b4c" Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.095998 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] Feb 18 11:57:11 crc kubenswrapper[4922]: I0218 11:57:11.107417 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d677498bd-cxq98"] Feb 18 11:57:12 crc kubenswrapper[4922]: I0218 11:57:12.039547 4922 generic.go:334] "Generic (PLEG): container finished" podID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerID="fc4b0d98b4a0ffc17082b832ce25867774e01fb1582211f0e74f81571c524a12" exitCode=0 Feb 18 11:57:12 crc kubenswrapper[4922]: I0218 11:57:12.039606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerDied","Data":"fc4b0d98b4a0ffc17082b832ce25867774e01fb1582211f0e74f81571c524a12"} Feb 18 11:57:12 crc kubenswrapper[4922]: I0218 11:57:12.043777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerStarted","Data":"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3"} Feb 18 11:57:12 crc kubenswrapper[4922]: I0218 11:57:12.990145 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" path="/var/lib/kubelet/pods/53371b07-a65f-4fec-8564-bcd51df6c010/volumes" Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.067945 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerID="666106d3ce79297e2d8ed502af3f7dcfa2bfde51946ace3e35cdb24c5fc323a9" exitCode=137 Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.068042 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" 
event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerDied","Data":"666106d3ce79297e2d8ed502af3f7dcfa2bfde51946ace3e35cdb24c5fc323a9"} Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.070710 4922 generic.go:334] "Generic (PLEG): container finished" podID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerID="d5a387d1fd841c5e7a8b16f873f082fef6f77bf7e19af0404c6f60662142d1da" exitCode=0 Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.070744 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerDied","Data":"d5a387d1fd841c5e7a8b16f873f082fef6f77bf7e19af0404c6f60662142d1da"} Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.678473 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:14 crc kubenswrapper[4922]: I0218 11:57:14.682623 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bb9876df9-jt7kg" Feb 18 11:57:15 crc kubenswrapper[4922]: I0218 11:57:15.436218 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:15 crc kubenswrapper[4922]: I0218 11:57:15.436599 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:15 crc kubenswrapper[4922]: I0218 11:57:15.478510 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:15 crc kubenswrapper[4922]: I0218 11:57:15.483540 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.106622 4922 generic.go:334] "Generic (PLEG): container finished" podID="4bcd3608-244b-44f0-be1f-5d953cd35964" containerID="b2705f7646a829371102cfea131123135c7490beec56768c960d999d9c5fa2de" exitCode=0 Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.108509 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjb6x" event={"ID":"4bcd3608-244b-44f0-be1f-5d953cd35964","Type":"ContainerDied","Data":"b2705f7646a829371102cfea131123135c7490beec56768c960d999d9c5fa2de"} Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.108557 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.109321 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.559652 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 11:57:16 crc kubenswrapper[4922]: I0218 11:57:16.863819 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:17 crc kubenswrapper[4922]: I0218 11:57:17.438048 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 11:57:17 crc kubenswrapper[4922]: I0218 11:57:17.438452 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 11:57:17 crc kubenswrapper[4922]: I0218 11:57:17.505814 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 18 11:57:17 crc kubenswrapper[4922]: I0218 11:57:17.522279 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.130778 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.130807 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.133497 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.133552 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.289509 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.389188 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d79df67b-mg9kq" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.676433 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.869326 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvmgk\" (UniqueName: \"kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk\") pod \"4bcd3608-244b-44f0-be1f-5d953cd35964\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.869585 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle\") pod \"4bcd3608-244b-44f0-be1f-5d953cd35964\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.869731 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config\") pod \"4bcd3608-244b-44f0-be1f-5d953cd35964\" (UID: \"4bcd3608-244b-44f0-be1f-5d953cd35964\") " Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.878662 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk" (OuterVolumeSpecName: "kube-api-access-kvmgk") pod "4bcd3608-244b-44f0-be1f-5d953cd35964" (UID: "4bcd3608-244b-44f0-be1f-5d953cd35964"). InnerVolumeSpecName "kube-api-access-kvmgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.958405 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config" (OuterVolumeSpecName: "config") pod "4bcd3608-244b-44f0-be1f-5d953cd35964" (UID: "4bcd3608-244b-44f0-be1f-5d953cd35964"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.964390 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bcd3608-244b-44f0-be1f-5d953cd35964" (UID: "4bcd3608-244b-44f0-be1f-5d953cd35964"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.982008 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.982051 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4bcd3608-244b-44f0-be1f-5d953cd35964-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:18 crc kubenswrapper[4922]: I0218 11:57:18.982068 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvmgk\" (UniqueName: \"kubernetes.io/projected/4bcd3608-244b-44f0-be1f-5d953cd35964-kube-api-access-kvmgk\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.141757 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.163536 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d79df67b-mg9kq" event={"ID":"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9","Type":"ContainerDied","Data":"daa7d2ea30da11dcfd9f4970f896f52151306629ac261d217fb3c7d7a4b261fd"} Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.163592 4922 scope.go:117] "RemoveContainer" containerID="5d696a6b456c46bcc03a5a4e42955c244ebb50a3f0ae3ca3ca3bec18dc6258ea" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.163782 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d79df67b-mg9kq" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.189946 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zjb6x" event={"ID":"4bcd3608-244b-44f0-be1f-5d953cd35964","Type":"ContainerDied","Data":"4f713717bbd69a1844002c6344555c40f26be59a2b8b6c3086945e62b2e3a5ca"} Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.189987 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f713717bbd69a1844002c6344555c40f26be59a2b8b6c3086945e62b2e3a5ca" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.191168 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zjb6x" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288070 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288160 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288185 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvkcr\" (UniqueName: \"kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288254 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288313 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.288429 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data\") pod \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\" (UID: \"e3b4cee1-5234-4b6c-93fa-3cb5687ecba9\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.289968 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs" (OuterVolumeSpecName: "logs") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.298547 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.302553 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr" (OuterVolumeSpecName: "kube-api-access-kvkcr") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "kube-api-access-kvkcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.329478 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c7b84785b-f8lmj" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.342026 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data" (OuterVolumeSpecName: "config-data") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.357429 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.387976 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts" (OuterVolumeSpecName: "scripts") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399442 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399486 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399499 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399513 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvkcr\" (UniqueName: \"kubernetes.io/projected/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-kube-api-access-kvkcr\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399526 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.399538 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.411510 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" (UID: "e3b4cee1-5234-4b6c-93fa-3cb5687ecba9"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.494828 4922 scope.go:117] "RemoveContainer" containerID="666106d3ce79297e2d8ed502af3f7dcfa2bfde51946ace3e35cdb24c5fc323a9" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.501192 4922 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.600281 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.619839 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.657338 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9d79df67b-mg9kq"] Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813023 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813287 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r77f\" (UniqueName: \"kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813338 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813458 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813489 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.813524 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data\") pod \"74a83ecb-de31-4767-a178-bccf8a37e93e\" (UID: \"74a83ecb-de31-4767-a178-bccf8a37e93e\") " Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.830466 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.830777 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.831152 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts" (OuterVolumeSpecName: "scripts") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.842930 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f" (OuterVolumeSpecName: "kube-api-access-4r77f") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "kube-api-access-4r77f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.915447 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r77f\" (UniqueName: \"kubernetes.io/projected/74a83ecb-de31-4767-a178-bccf8a37e93e-kube-api-access-4r77f\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.915484 4922 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.915496 4922 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a83ecb-de31-4767-a178-bccf8a37e93e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:19 crc kubenswrapper[4922]: I0218 11:57:19.915508 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.003725 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.019179 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.082437 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.082920 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon-log" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.082934 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon-log" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.082953 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.082959 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.082994 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="probe" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083001 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="probe" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.083013 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083019 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.083031 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="cinder-scheduler" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083037 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="cinder-scheduler" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.083048 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcd3608-244b-44f0-be1f-5d953cd35964" containerName="neutron-db-sync" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083054 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcd3608-244b-44f0-be1f-5d953cd35964" containerName="neutron-db-sync" Feb 18 11:57:20 crc kubenswrapper[4922]: E0218 11:57:20.083061 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083067 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083246 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon-log" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083262 4922 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api-log" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083272 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="53371b07-a65f-4fec-8564-bcd51df6c010" containerName="barbican-api" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083284 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcd3608-244b-44f0-be1f-5d953cd35964" containerName="neutron-db-sync" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083296 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" containerName="horizon" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083305 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="cinder-scheduler" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.083313 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" containerName="probe" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.084356 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.137535 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.222969 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.226591 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.231440 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.231723 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bmh7l" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.232311 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.232471 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234351 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234462 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mphx\" (UniqueName: \"kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234535 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234647 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234677 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.234745 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.250963 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"245b1cb9-d98f-4875-adf6-ab887f76849d","Type":"ContainerStarted","Data":"3cb7398179b3d5a8982fa17deb6a397be93a34b5df70b46d6194d91e2f51206b"} Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.265733 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.288350 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.288668 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.288564 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.288589 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74a83ecb-de31-4767-a178-bccf8a37e93e","Type":"ContainerDied","Data":"94eaad4dfdfd79b9d6e5320a8b3266331430b86c446a5d0f1d18cea2bf387427"} Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.289839 4922 scope.go:117] "RemoveContainer" containerID="fc4b0d98b4a0ffc17082b832ce25867774e01fb1582211f0e74f81571c524a12" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.297275 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data" (OuterVolumeSpecName: "config-data") pod "74a83ecb-de31-4767-a178-bccf8a37e93e" (UID: "74a83ecb-de31-4767-a178-bccf8a37e93e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.314769 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.193801735 podStartE2EDuration="27.314748392s" podCreationTimestamp="2026-02-18 11:56:53 +0000 UTC" firstStartedPulling="2026-02-18 11:56:54.73651391 +0000 UTC m=+1216.464217990" lastFinishedPulling="2026-02-18 11:57:18.857460567 +0000 UTC m=+1240.585164647" observedRunningTime="2026-02-18 11:57:20.281747338 +0000 UTC m=+1242.009451428" watchObservedRunningTime="2026-02-18 11:57:20.314748392 +0000 UTC m=+1242.042452472" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.332644 4922 scope.go:117] "RemoveContainer" containerID="d5a387d1fd841c5e7a8b16f873f082fef6f77bf7e19af0404c6f60662142d1da" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336184 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mphx\" (UniqueName: \"kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336218 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336259 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjthv\" (UniqueName: \"kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336294 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336451 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: 
\"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336532 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336651 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.336731 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83ecb-de31-4767-a178-bccf8a37e93e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.337335 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.337731 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.338280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.339509 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.340145 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.364051 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mphx\" (UniqueName: \"kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx\") pod \"dnsmasq-dns-6578955fd5-78vwz\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.449040 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjthv\" (UniqueName: \"kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.449718 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.450307 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.450666 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.450977 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.458173 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.459616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.461204 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle\") pod \"neutron-979b8465b-gmztk\" (UID: 
\"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.475321 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.478068 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.491317 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjthv\" (UniqueName: \"kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv\") pod \"neutron-979b8465b-gmztk\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.567716 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.802773 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.838412 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.868479 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.880158 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.899778 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.929281 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.969025 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.969078 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd3cd2cf-8780-4de2-925c-5385d6398e49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.969123 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc26d\" (UniqueName: \"kubernetes.io/projected/bd3cd2cf-8780-4de2-925c-5385d6398e49-kube-api-access-kc26d\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.969167 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.969343 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:20 crc kubenswrapper[4922]: I0218 11:57:20.970238 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.012907 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a83ecb-de31-4767-a178-bccf8a37e93e" path="/var/lib/kubelet/pods/74a83ecb-de31-4767-a178-bccf8a37e93e/volumes" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.013851 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b4cee1-5234-4b6c-93fa-3cb5687ecba9" path="/var/lib/kubelet/pods/e3b4cee1-5234-4b6c-93fa-3cb5687ecba9/volumes" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.071456 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073476 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd3cd2cf-8780-4de2-925c-5385d6398e49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073541 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc26d\" (UniqueName: \"kubernetes.io/projected/bd3cd2cf-8780-4de2-925c-5385d6398e49-kube-api-access-kc26d\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073591 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073823 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.073982 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd3cd2cf-8780-4de2-925c-5385d6398e49-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.080616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.081077 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.083414 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.083850 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd3cd2cf-8780-4de2-925c-5385d6398e49-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.104081 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc26d\" (UniqueName: \"kubernetes.io/projected/bd3cd2cf-8780-4de2-925c-5385d6398e49-kube-api-access-kc26d\") pod \"cinder-scheduler-0\" (UID: \"bd3cd2cf-8780-4de2-925c-5385d6398e49\") " pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.171135 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.238052 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.375163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" event={"ID":"d36f2285-2752-4cad-bf52-fe6ae0b262d1","Type":"ContainerStarted","Data":"f6b2696bce7ccb6880bdda930a5ccfaf927c9ebc64dad81fb193a050fd9b8c85"} Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.381495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerStarted","Data":"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e"} Feb 18 11:57:21 crc kubenswrapper[4922]: I0218 11:57:21.468544 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:57:21 crc kubenswrapper[4922]: W0218 11:57:21.489699 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b1ea57e_dcf2_4e47_8650_af483b18ea8f.slice/crio-d41035e2a8cf606b335f710a1d0f91690ca9a161e3cc25d246206a6e3ab38420 WatchSource:0}: Error finding container d41035e2a8cf606b335f710a1d0f91690ca9a161e3cc25d246206a6e3ab38420: Status 404 returned error can't find the container with id d41035e2a8cf606b335f710a1d0f91690ca9a161e3cc25d246206a6e3ab38420 Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.028909 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 11:57:22 crc kubenswrapper[4922]: W0218 11:57:22.064727 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd3cd2cf_8780_4de2_925c_5385d6398e49.slice/crio-9f7f3452ee5fcb60297649d005b3ab7f8df412c6cf50a9719e163931e16c2987 WatchSource:0}: Error finding container 9f7f3452ee5fcb60297649d005b3ab7f8df412c6cf50a9719e163931e16c2987: Status 404 returned error can't find the container with id 9f7f3452ee5fcb60297649d005b3ab7f8df412c6cf50a9719e163931e16c2987 Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.394103 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd3cd2cf-8780-4de2-925c-5385d6398e49","Type":"ContainerStarted","Data":"9f7f3452ee5fcb60297649d005b3ab7f8df412c6cf50a9719e163931e16c2987"} Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.397633 4922 generic.go:334] "Generic (PLEG): container finished" podID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerID="839de4434ebe21a5f0abbc718b56284e0f7743bf3463c809b6cae16fa7c2db5d" exitCode=0 Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.397706 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" event={"ID":"d36f2285-2752-4cad-bf52-fe6ae0b262d1","Type":"ContainerDied","Data":"839de4434ebe21a5f0abbc718b56284e0f7743bf3463c809b6cae16fa7c2db5d"} Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.400895 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerStarted","Data":"c50b1f228a346e798dee756ffe7172e5752cdc2bb9dd07e1f2ab8318d4e33a78"} Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.400939 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" 
event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerStarted","Data":"d41035e2a8cf606b335f710a1d0f91690ca9a161e3cc25d246206a6e3ab38420"} Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.403608 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.403727 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.407219 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.407397 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.437133 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.740861 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.828543 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f57669c89-7wt5g"] Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.844860 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.861552 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.861811 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.873075 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f57669c89-7wt5g"] Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969576 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-httpd-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969661 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-internal-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969756 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz69\" (UniqueName: \"kubernetes.io/projected/49aa13b6-3343-43d5-949e-3118c1711ed0-kube-api-access-klz69\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " 
pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969822 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-ovndb-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969841 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-public-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:22 crc kubenswrapper[4922]: I0218 11:57:22.969881 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-combined-ca-bundle\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.071342 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-ovndb-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.071522 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-public-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.071749 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-combined-ca-bundle\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.071894 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-httpd-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.072089 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.072319 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-internal-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 
11:57:23.072401 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klz69\" (UniqueName: \"kubernetes.io/projected/49aa13b6-3343-43d5-949e-3118c1711ed0-kube-api-access-klz69\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.079635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-httpd-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.086311 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-public-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.087099 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-ovndb-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.087927 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-config\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.088565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-internal-tls-certs\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.090487 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa13b6-3343-43d5-949e-3118c1711ed0-combined-ca-bundle\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.113556 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz69\" (UniqueName: \"kubernetes.io/projected/49aa13b6-3343-43d5-949e-3118c1711ed0-kube-api-access-klz69\") pod \"neutron-f57669c89-7wt5g\" (UID: \"49aa13b6-3343-43d5-949e-3118c1711ed0\") " pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.223721 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.447316 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd3cd2cf-8780-4de2-925c-5385d6398e49","Type":"ContainerStarted","Data":"9228fcb9048d9089f519e9840c6b71316b8c7d75be28af3c658ae2e0a85e1545"} Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.458035 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" event={"ID":"d36f2285-2752-4cad-bf52-fe6ae0b262d1","Type":"ContainerStarted","Data":"dcc50eb32469aa9ff5b74469bad439e5bb4a6c4458c1ea0f88da9a06c0a2437a"} Feb 18 11:57:23 crc kubenswrapper[4922]: I0218 11:57:23.458667 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.101963 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" podStartSLOduration=5.101945297 podStartE2EDuration="5.101945297s" podCreationTimestamp="2026-02-18 11:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:23.502206538 +0000 UTC m=+1245.229910638" watchObservedRunningTime="2026-02-18 11:57:24.101945297 +0000 UTC m=+1245.829649377" Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.106693 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f57669c89-7wt5g"] Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.468722 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f57669c89-7wt5g" event={"ID":"49aa13b6-3343-43d5-949e-3118c1711ed0","Type":"ContainerStarted","Data":"7530ef8b8060cfcd0f02e18c4c18bf4b5026b767146c5c28f0afb184447def2f"} Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.471649 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerStarted","Data":"dd50276aac03478ea25adcde79bcd1337f3694428dc64ce90dcf6fe92e773c3c"} Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.471825 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.476482 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerStarted","Data":"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34"} Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.476911 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="proxy-httpd" containerID="cri-o://5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34" gracePeriod=30 Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.476951 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-notification-agent" containerID="cri-o://c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3" gracePeriod=30 Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.476907 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-central-agent" containerID="cri-o://f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9" gracePeriod=30 Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.476949 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="sg-core" containerID="cri-o://69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e" gracePeriod=30 Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.502912 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-979b8465b-gmztk" podStartSLOduration=4.502892851 podStartE2EDuration="4.502892851s" podCreationTimestamp="2026-02-18 11:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:24.497202987 +0000 UTC m=+1246.224907077" watchObservedRunningTime="2026-02-18 11:57:24.502892851 +0000 UTC m=+1246.230596931" Feb 18 11:57:24 crc kubenswrapper[4922]: I0218 11:57:24.525124 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.760810546 podStartE2EDuration="16.525102383s" podCreationTimestamp="2026-02-18 11:57:08 +0000 UTC" firstStartedPulling="2026-02-18 11:57:09.36797671 +0000 UTC m=+1231.095680800" lastFinishedPulling="2026-02-18 11:57:23.132268557 +0000 UTC m=+1244.859972637" observedRunningTime="2026-02-18 11:57:24.524906238 +0000 UTC m=+1246.252610318" watchObservedRunningTime="2026-02-18 11:57:24.525102383 +0000 UTC m=+1246.252806463" Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501171 4922 generic.go:334] "Generic (PLEG): container finished" podID="72a01906-824d-4581-8d88-7d40a91786a1" containerID="5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34" exitCode=0 Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501789 4922 generic.go:334] "Generic (PLEG): container finished" podID="72a01906-824d-4581-8d88-7d40a91786a1" containerID="69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e" exitCode=2 Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501804 4922 generic.go:334] "Generic (PLEG): container finished" podID="72a01906-824d-4581-8d88-7d40a91786a1" containerID="f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9" exitCode=0 Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501923 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerDied","Data":"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501957 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerDied","Data":"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.501972 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerDied","Data":"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.524153 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f57669c89-7wt5g" 
event={"ID":"49aa13b6-3343-43d5-949e-3118c1711ed0","Type":"ContainerStarted","Data":"6c9993c754311fb00ea2dde49190e698127552602ae7569fc62de61cbd67aee0"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.524225 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f57669c89-7wt5g" event={"ID":"49aa13b6-3343-43d5-949e-3118c1711ed0","Type":"ContainerStarted","Data":"9aa3a8ed7f07eb52dcfaf3d9aa4b738c355b70169e828b85dff70e19d80a137b"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.526639 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.536330 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd3cd2cf-8780-4de2-925c-5385d6398e49","Type":"ContainerStarted","Data":"5266a6b6549185d692cec1e8a671b0c2fae8160f00d1dd69abb4f69e1b2e0601"} Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.564113 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f57669c89-7wt5g" podStartSLOduration=3.564087833 podStartE2EDuration="3.564087833s" podCreationTimestamp="2026-02-18 11:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:25.559382484 +0000 UTC m=+1247.287086574" watchObservedRunningTime="2026-02-18 11:57:25.564087833 +0000 UTC m=+1247.291791913" Feb 18 11:57:25 crc kubenswrapper[4922]: I0218 11:57:25.601110 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.601087478 podStartE2EDuration="5.601087478s" podCreationTimestamp="2026-02-18 11:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:25.581537854 +0000 UTC m=+1247.309241934" watchObservedRunningTime="2026-02-18 11:57:25.601087478 +0000 UTC m=+1247.328791558" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.119984 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.239607 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.248681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.248855 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.248926 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.248962 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.248995 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.249041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqwwp\" (UniqueName: \"kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.249106 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd\") pod \"72a01906-824d-4581-8d88-7d40a91786a1\" (UID: \"72a01906-824d-4581-8d88-7d40a91786a1\") " Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.249858 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.249927 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.250473 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.250493 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72a01906-824d-4581-8d88-7d40a91786a1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.258808 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp" (OuterVolumeSpecName: "kube-api-access-sqwwp") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "kube-api-access-sqwwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.259674 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts" (OuterVolumeSpecName: "scripts") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.304240 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.349690 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.354439 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.354472 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.354486 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqwwp\" (UniqueName: \"kubernetes.io/projected/72a01906-824d-4581-8d88-7d40a91786a1-kube-api-access-sqwwp\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.354498 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.404698 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data" (OuterVolumeSpecName: "config-data") pod "72a01906-824d-4581-8d88-7d40a91786a1" (UID: "72a01906-824d-4581-8d88-7d40a91786a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.456211 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a01906-824d-4581-8d88-7d40a91786a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.550270 4922 generic.go:334] "Generic (PLEG): container finished" podID="72a01906-824d-4581-8d88-7d40a91786a1" containerID="c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3" exitCode=0 Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.551133 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.551303 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerDied","Data":"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3"} Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.551579 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72a01906-824d-4581-8d88-7d40a91786a1","Type":"ContainerDied","Data":"e51f7f2faa63d11b52bb16edb526931063add3c924782fc45c0056ce678908a1"} Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.551679 4922 scope.go:117] "RemoveContainer" containerID="5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.594496 4922 scope.go:117] "RemoveContainer" containerID="69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.596494 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.609113 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.618938 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.619448 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="proxy-httpd" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619469 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="proxy-httpd" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.619503 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-central-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619511 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-central-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.619532 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="sg-core" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619538 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="sg-core" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.619549 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-notification-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619556 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-notification-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619731 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="sg-core" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619747 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-central-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619760 4922 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="ceilometer-notification-agent" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.619784 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a01906-824d-4581-8d88-7d40a91786a1" containerName="proxy-httpd" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.620325 4922 scope.go:117] "RemoveContainer" containerID="c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.624491 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.627454 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.627509 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.648573 4922 scope.go:117] "RemoveContainer" containerID="f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.664920 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.697645 4922 scope.go:117] "RemoveContainer" containerID="5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.698684 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34\": container with ID starting with 5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34 not found: ID does not exist" containerID="5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.698735 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34"} err="failed to get container status \"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34\": rpc error: code = NotFound desc = could not find container \"5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34\": container with ID starting with 5743fc1cc15b1c94b293ee77602825aaa6c5b1bdde4fb72ac39f92a37ff15f34 not found: ID does not exist" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.698764 4922 scope.go:117] "RemoveContainer" containerID="69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.700146 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e\": container with ID starting with 69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e not found: ID does not exist" containerID="69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.700176 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e"} err="failed to get container status \"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e\": rpc 
error: code = NotFound desc = could not find container \"69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e\": container with ID starting with 69736a38daeb8610262f32bde5f3bd06b607bcef34f4fbe4b57e58aae6aee07e not found: ID does not exist" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.700190 4922 scope.go:117] "RemoveContainer" containerID="c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.700514 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3\": container with ID starting with c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3 not found: ID does not exist" containerID="c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.700551 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3"} err="failed to get container status \"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3\": rpc error: code = NotFound desc = could not find container \"c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3\": container with ID starting with c4b4f11931af9a7b58f5f6e6efc41376cb3e1dcbdd51081feef30ab45a264ba3 not found: ID does not exist" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.700567 4922 scope.go:117] "RemoveContainer" containerID="f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9" Feb 18 11:57:26 crc kubenswrapper[4922]: E0218 11:57:26.700956 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9\": container with ID starting with f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9 not found: ID does not exist" containerID="f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.700988 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9"} err="failed to get container status \"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9\": rpc error: code = NotFound desc = could not find container \"f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9\": container with ID starting with f76ae30b441313706d2b84e1c38399b3498bd5d4bdc045db95788de5d490e8e9 not found: ID does not exist" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761341 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26464\" (UniqueName: \"kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761426 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761470 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761500 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761533 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761556 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.761597 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864273 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864632 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864701 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864771 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26464\" (UniqueName: \"kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864825 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " 
pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864874 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864890 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.864916 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.865642 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.871959 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.872526 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.874495 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.874716 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.890399 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26464\" (UniqueName: \"kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464\") pod \"ceilometer-0\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " pod="openstack/ceilometer-0" Feb 18 11:57:26 crc kubenswrapper[4922]: I0218 11:57:26.950348 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:27 crc kubenswrapper[4922]: I0218 11:57:27.029201 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a01906-824d-4581-8d88-7d40a91786a1" path="/var/lib/kubelet/pods/72a01906-824d-4581-8d88-7d40a91786a1/volumes" Feb 18 11:57:27 crc kubenswrapper[4922]: I0218 11:57:27.487277 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:27 crc kubenswrapper[4922]: W0218 11:57:27.497875 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad26da5_56a1_4f67_aae9_ab321499352f.slice/crio-4ab86a79e660c19ee3ee370625c973f496ff01d2184f2823cb5772f2dd377459 WatchSource:0}: Error finding container 4ab86a79e660c19ee3ee370625c973f496ff01d2184f2823cb5772f2dd377459: Status 404 returned error can't find the container with id 4ab86a79e660c19ee3ee370625c973f496ff01d2184f2823cb5772f2dd377459 Feb 18 11:57:27 crc kubenswrapper[4922]: I0218 11:57:27.571819 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerStarted","Data":"4ab86a79e660c19ee3ee370625c973f496ff01d2184f2823cb5772f2dd377459"} Feb 18 11:57:28 crc kubenswrapper[4922]: I0218 11:57:28.582849 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerStarted","Data":"4f7ad4cbfa0275aa87aaa646fee7583ae6b1acc937b50ade063f9b7ab68d66a4"} Feb 18 11:57:29 crc kubenswrapper[4922]: I0218 11:57:29.085994 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:29 crc kubenswrapper[4922]: I0218 11:57:29.601498 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerStarted","Data":"6840f0d264514efcd94bc4a02aec112cf5f76e7909fa91e4880bd5ff5ad6f1bc"} Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.369810 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.378590 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-log" containerID="cri-o://2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a" gracePeriod=30 Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.378769 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-httpd" containerID="cri-o://fae0743e5e0fb73717e68120a40713dc2bcdd3fa357699afed1bff0f5a1368e5" gracePeriod=30 Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.477275 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.646740 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.647278 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="dnsmasq-dns" 
containerID="cri-o://6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674" gracePeriod=10 Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.670026 4922 generic.go:334] "Generic (PLEG): container finished" podID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerID="2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a" exitCode=143 Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.670163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerDied","Data":"2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a"} Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.709112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerStarted","Data":"0426fee054517907fa41850ee4ebf4b625cb2e9874199241df43ce1756e27ff1"} Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.763146 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p2pzf"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.764899 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.786463 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p2pzf"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.788427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnpw\" (UniqueName: \"kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.788553 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: E0218 11:57:30.790159 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24eb828d_acb3_4b88_96dc_8d3bb8c49e86.slice/crio-conmon-2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24eb828d_acb3_4b88_96dc_8d3bb8c49e86.slice/crio-2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a.scope\": RecentStats: unable to find data in memory cache]" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.851438 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-b9cbq"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.852608 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.890494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnpw\" (UniqueName: \"kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.890572 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.890698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.890744 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9855v\" (UniqueName: \"kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.892421 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.899468 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b9cbq"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.941207 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnpw\" (UniqueName: \"kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw\") pod \"nova-api-db-create-p2pzf\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.956524 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-96dc-account-create-update-4px58"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.957945 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.960784 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.965395 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-96dc-account-create-update-4px58"] Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.992542 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxsg\" (UniqueName: \"kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.992649 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.992686 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9855v\" (UniqueName: \"kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.992778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:30 crc kubenswrapper[4922]: I0218 11:57:30.993610 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.031074 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9855v\" (UniqueName: \"kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v\") pod \"nova-cell0-db-create-b9cbq\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.046595 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ddrmz"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.048161 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.057828 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ddrmz"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.096675 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxsg\" (UniqueName: \"kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.096912 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.097811 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.137082 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxsg\" (UniqueName: \"kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg\") pod \"nova-api-96dc-account-create-update-4px58\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.161323 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ea20-account-create-update-k5t5k"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.162265 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.163323 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.169762 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ea20-account-create-update-k5t5k"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.172777 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.184679 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.200280 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4f87\" (UniqueName: \"kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.200448 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.259619 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c19a-account-create-update-24shd"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.261270 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.272683 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.291998 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.295294 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c19a-account-create-update-24shd"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.304557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.304604 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh78w\" (UniqueName: \"kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.304652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.304707 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc 
kubenswrapper[4922]: I0218 11:57:31.304785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4f87\" (UniqueName: \"kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.304822 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7dh4\" (UniqueName: \"kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.305737 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.335225 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4f87\" (UniqueName: \"kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87\") pod \"nova-cell1-db-create-ddrmz\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.406301 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7dh4\" (UniqueName: \"kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.406901 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.406936 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh78w\" (UniqueName: \"kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.407039 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.410345 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.414621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.427998 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7dh4\" (UniqueName: \"kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4\") pod \"nova-cell0-ea20-account-create-update-k5t5k\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.428669 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh78w\" (UniqueName: \"kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w\") pod \"nova-cell1-c19a-account-create-update-24shd\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.439470 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.524652 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.602094 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.800475 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.817616 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821513 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821591 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2htl\" (UniqueName: \"kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821656 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821683 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821714 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.821822 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config\") pod \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\" (UID: \"d61bee1b-0ee4-4c97-8d5d-8655406f124c\") " Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.863384 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl" (OuterVolumeSpecName: "kube-api-access-k2htl") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "kube-api-access-k2htl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.913221 4922 generic.go:334] "Generic (PLEG): container finished" podID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerID="6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674" exitCode=0 Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.913273 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerDied","Data":"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674"} Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.913326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" event={"ID":"d61bee1b-0ee4-4c97-8d5d-8655406f124c","Type":"ContainerDied","Data":"18a7a7ab6ce87177724ccea1ae9a737d84614c82a96458fedc35807aee0f2e52"} Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.913349 4922 scope.go:117] "RemoveContainer" containerID="6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.913559 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.928612 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2htl\" (UniqueName: \"kubernetes.io/projected/d61bee1b-0ee4-4c97-8d5d-8655406f124c-kube-api-access-k2htl\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.966047 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config" (OuterVolumeSpecName: "config") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.976388 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p2pzf"] Feb 18 11:57:31 crc kubenswrapper[4922]: I0218 11:57:31.990459 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-b9cbq"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.003656 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.009579 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.040509 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.041202 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.041238 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.104862 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.127561 4922 scope.go:117] "RemoveContainer" containerID="ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.144436 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.152892 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d61bee1b-0ee4-4c97-8d5d-8655406f124c" (UID: "d61bee1b-0ee4-4c97-8d5d-8655406f124c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.207440 4922 scope.go:117] "RemoveContainer" containerID="6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674" Feb 18 11:57:32 crc kubenswrapper[4922]: E0218 11:57:32.207901 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674\": container with ID starting with 6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674 not found: ID does not exist" containerID="6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.207941 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674"} err="failed to get container status \"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674\": rpc error: code = NotFound desc = could not find container \"6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674\": container with ID starting with 6fc40bd77d62f143a53ef0627d5b912914ae270a819ef4018ea2e7de4e360674 not found: ID does not exist" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.207970 4922 scope.go:117] "RemoveContainer" containerID="ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61" Feb 18 11:57:32 crc kubenswrapper[4922]: E0218 11:57:32.208214 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61\": container with ID starting with ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61 not found: ID does not exist" containerID="ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.208239 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61"} err="failed to get container status \"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61\": rpc error: code = NotFound desc = could not find container \"ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61\": container with ID starting with ce7a1f8b346e07ed0990be6cb946929c06f52e0b61b2bdd4bba88e18309b0f61 not found: ID does not exist" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.251078 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d61bee1b-0ee4-4c97-8d5d-8655406f124c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.280062 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.297876 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-jrg5v"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.396634 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-96dc-account-create-update-4px58"] Feb 18 11:57:32 crc kubenswrapper[4922]: W0218 11:57:32.408820 4922 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41c3abe9_3a81_44ef_babf_818b176f6437.slice/crio-6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92 WatchSource:0}: Error finding container 6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92: Status 404 returned error can't find the container with id 6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92 Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.409763 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ddrmz"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.562550 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ea20-account-create-update-k5t5k"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.834190 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c19a-account-create-update-24shd"] Feb 18 11:57:32 crc kubenswrapper[4922]: I0218 11:57:32.952623 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" event={"ID":"9b28b3ba-c697-4cef-8e3f-e41317e3abe6","Type":"ContainerStarted","Data":"c71bbdea45ab7a402cd8fc37c94e31cbd03659099c218416879906794286646a"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.016408 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" path="/var/lib/kubelet/pods/d61bee1b-0ee4-4c97-8d5d-8655406f124c/volumes" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.016394 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-b9cbq" podStartSLOduration=3.016354996 podStartE2EDuration="3.016354996s" podCreationTimestamp="2026-02-18 11:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:32.999224133 +0000 UTC m=+1254.726928213" watchObservedRunningTime="2026-02-18 11:57:33.016354996 +0000 UTC m=+1254.744059076" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.022302 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b9cbq" event={"ID":"cea3a613-3571-4de4-be73-07a4db1c146e","Type":"ContainerStarted","Data":"2ca4584a9bdc48778761e031dee58443f19954bb292722b1e84049c5d0d3891e"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.022454 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b9cbq" event={"ID":"cea3a613-3571-4de4-be73-07a4db1c146e","Type":"ContainerStarted","Data":"73eb578f99941b6e8fcccd4f7146c408046aea775fac188592a56b6aa1c8c60e"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.022521 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96dc-account-create-update-4px58" event={"ID":"7513cf0a-f653-48b9-a365-9732179aaffc","Type":"ContainerStarted","Data":"64eecc2f7b37277d3eaa046c9a0ed3c760eb621a97a64a32162b927b6a47e4d1"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.022587 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96dc-account-create-update-4px58" event={"ID":"7513cf0a-f653-48b9-a365-9732179aaffc","Type":"ContainerStarted","Data":"99782e6d55a2e38110d1ce6513a88af301a487264155e726820a08e94b0d1c9f"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.022717 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19a-account-create-update-24shd" 
event={"ID":"f9360e33-9ae9-4b84-a898-c2c22626a565","Type":"ContainerStarted","Data":"95ae0db4dc810a397e5536957907573f044ce4062de198f02463dffeab24a900"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.036765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p2pzf" event={"ID":"157bc07b-77b8-4a29-b8e0-9a205215187b","Type":"ContainerStarted","Data":"6447a128ed7c2ef808d7f125af51fae05848cf0c3a78ee0cef1bb21c2071c85c"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.037066 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p2pzf" event={"ID":"157bc07b-77b8-4a29-b8e0-9a205215187b","Type":"ContainerStarted","Data":"15aa49b2b6d6e10c8c1597a89c8dfd815ac09c35f2814b8e479c59053ff9efa1"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.054128 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-96dc-account-create-update-4px58" podStartSLOduration=3.05410833 podStartE2EDuration="3.05410833s" podCreationTimestamp="2026-02-18 11:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:33.020778647 +0000 UTC m=+1254.748482727" watchObservedRunningTime="2026-02-18 11:57:33.05410833 +0000 UTC m=+1254.781812410" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.078713 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerStarted","Data":"861840f7d8623349410807b78c7acecc1dbed44761c62e936d5db92618e71c43"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.078891 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-central-agent" containerID="cri-o://4f7ad4cbfa0275aa87aaa646fee7583ae6b1acc937b50ade063f9b7ab68d66a4" gracePeriod=30 Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.079116 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.079136 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="proxy-httpd" containerID="cri-o://861840f7d8623349410807b78c7acecc1dbed44761c62e936d5db92618e71c43" gracePeriod=30 Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.079148 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="sg-core" containerID="cri-o://0426fee054517907fa41850ee4ebf4b625cb2e9874199241df43ce1756e27ff1" gracePeriod=30 Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.079159 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-notification-agent" containerID="cri-o://6840f0d264514efcd94bc4a02aec112cf5f76e7909fa91e4880bd5ff5ad6f1bc" gracePeriod=30 Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.082329 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-p2pzf" podStartSLOduration=3.082303753 podStartE2EDuration="3.082303753s" podCreationTimestamp="2026-02-18 11:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:33.072040853 +0000 UTC m=+1254.799744943" watchObservedRunningTime="2026-02-18 11:57:33.082303753 +0000 UTC m=+1254.810007833" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.090275 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ddrmz" event={"ID":"41c3abe9-3a81-44ef-babf-818b176f6437","Type":"ContainerStarted","Data":"bf9c600a2ed87d6610ddd93585d70fd608560cee157862758bf35ddf2c2a4754"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.090312 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ddrmz" event={"ID":"41c3abe9-3a81-44ef-babf-818b176f6437","Type":"ContainerStarted","Data":"6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92"} Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.132968 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.698479248 podStartE2EDuration="7.132948223s" podCreationTimestamp="2026-02-18 11:57:26 +0000 UTC" firstStartedPulling="2026-02-18 11:57:27.502755145 +0000 UTC m=+1249.230459225" lastFinishedPulling="2026-02-18 11:57:31.93722412 +0000 UTC m=+1253.664928200" observedRunningTime="2026-02-18 11:57:33.121924974 +0000 UTC m=+1254.849629054" watchObservedRunningTime="2026-02-18 11:57:33.132948223 +0000 UTC m=+1254.860652303" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.149759 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ddrmz" podStartSLOduration=2.149738907 podStartE2EDuration="2.149738907s" podCreationTimestamp="2026-02-18 11:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:33.145977062 +0000 UTC m=+1254.873681162" watchObservedRunningTime="2026-02-18 11:57:33.149738907 +0000 UTC m=+1254.877442987" Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.395334 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.395675 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-log" containerID="cri-o://0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18" gracePeriod=30 Feb 18 11:57:33 crc kubenswrapper[4922]: I0218 11:57:33.396220 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-httpd" containerID="cri-o://be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4" gracePeriod=30 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.123006 4922 generic.go:334] "Generic (PLEG): container finished" podID="7513cf0a-f653-48b9-a365-9732179aaffc" containerID="64eecc2f7b37277d3eaa046c9a0ed3c760eb621a97a64a32162b927b6a47e4d1" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.123421 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96dc-account-create-update-4px58" event={"ID":"7513cf0a-f653-48b9-a365-9732179aaffc","Type":"ContainerDied","Data":"64eecc2f7b37277d3eaa046c9a0ed3c760eb621a97a64a32162b927b6a47e4d1"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 
11:57:34.126937 4922 generic.go:334] "Generic (PLEG): container finished" podID="41c3abe9-3a81-44ef-babf-818b176f6437" containerID="bf9c600a2ed87d6610ddd93585d70fd608560cee157862758bf35ddf2c2a4754" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.127016 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ddrmz" event={"ID":"41c3abe9-3a81-44ef-babf-818b176f6437","Type":"ContainerDied","Data":"bf9c600a2ed87d6610ddd93585d70fd608560cee157862758bf35ddf2c2a4754"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.129700 4922 generic.go:334] "Generic (PLEG): container finished" podID="9b28b3ba-c697-4cef-8e3f-e41317e3abe6" containerID="ac0ab0e9aaca817513e97dbed88ce8e6eac29d917cc8fc47fb5c8da1460429d9" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.129772 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" event={"ID":"9b28b3ba-c697-4cef-8e3f-e41317e3abe6","Type":"ContainerDied","Data":"ac0ab0e9aaca817513e97dbed88ce8e6eac29d917cc8fc47fb5c8da1460429d9"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.136641 4922 generic.go:334] "Generic (PLEG): container finished" podID="7810aaca-e072-467b-bba7-6a3e12310c68" containerID="0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18" exitCode=143 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.136938 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerDied","Data":"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.144667 4922 generic.go:334] "Generic (PLEG): container finished" podID="cea3a613-3571-4de4-be73-07a4db1c146e" containerID="2ca4584a9bdc48778761e031dee58443f19954bb292722b1e84049c5d0d3891e" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.144745 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b9cbq" event={"ID":"cea3a613-3571-4de4-be73-07a4db1c146e","Type":"ContainerDied","Data":"2ca4584a9bdc48778761e031dee58443f19954bb292722b1e84049c5d0d3891e"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179584 4922 generic.go:334] "Generic (PLEG): container finished" podID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerID="861840f7d8623349410807b78c7acecc1dbed44761c62e936d5db92618e71c43" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179624 4922 generic.go:334] "Generic (PLEG): container finished" podID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerID="0426fee054517907fa41850ee4ebf4b625cb2e9874199241df43ce1756e27ff1" exitCode=2 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179632 4922 generic.go:334] "Generic (PLEG): container finished" podID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerID="6840f0d264514efcd94bc4a02aec112cf5f76e7909fa91e4880bd5ff5ad6f1bc" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179732 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerDied","Data":"861840f7d8623349410807b78c7acecc1dbed44761c62e936d5db92618e71c43"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179763 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerDied","Data":"0426fee054517907fa41850ee4ebf4b625cb2e9874199241df43ce1756e27ff1"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.179778 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerDied","Data":"6840f0d264514efcd94bc4a02aec112cf5f76e7909fa91e4880bd5ff5ad6f1bc"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.189140 4922 generic.go:334] "Generic (PLEG): container finished" podID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerID="fae0743e5e0fb73717e68120a40713dc2bcdd3fa357699afed1bff0f5a1368e5" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.189229 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerDied","Data":"fae0743e5e0fb73717e68120a40713dc2bcdd3fa357699afed1bff0f5a1368e5"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.211196 4922 generic.go:334] "Generic (PLEG): container finished" podID="f9360e33-9ae9-4b84-a898-c2c22626a565" containerID="9eae3101b2310737957f7e6d08c731592c72422d2cd0b2731a1d4e5979cf4d34" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.211278 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19a-account-create-update-24shd" event={"ID":"f9360e33-9ae9-4b84-a898-c2c22626a565","Type":"ContainerDied","Data":"9eae3101b2310737957f7e6d08c731592c72422d2cd0b2731a1d4e5979cf4d34"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.213526 4922 generic.go:334] "Generic (PLEG): container finished" podID="157bc07b-77b8-4a29-b8e0-9a205215187b" containerID="6447a128ed7c2ef808d7f125af51fae05848cf0c3a78ee0cef1bb21c2071c85c" exitCode=0 Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.213571 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p2pzf" event={"ID":"157bc07b-77b8-4a29-b8e0-9a205215187b","Type":"ContainerDied","Data":"6447a128ed7c2ef808d7f125af51fae05848cf0c3a78ee0cef1bb21c2071c85c"} Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.337033 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513417 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513486 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513512 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513622 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513659 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.513690 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.514744 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcrkn\" (UniqueName: \"kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.514766 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs\") pod \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\" (UID: \"24eb828d-acb3-4b88-96dc-8d3bb8c49e86\") " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.516131 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.516283 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs" (OuterVolumeSpecName: "logs") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.526679 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts" (OuterVolumeSpecName: "scripts") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.531423 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn" (OuterVolumeSpecName: "kube-api-access-qcrkn") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "kube-api-access-qcrkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.553242 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.562495 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.578046 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data" (OuterVolumeSpecName: "config-data") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.589700 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "24eb828d-acb3-4b88-96dc-8d3bb8c49e86" (UID: "24eb828d-acb3-4b88-96dc-8d3bb8c49e86"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.618726 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619069 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcrkn\" (UniqueName: \"kubernetes.io/projected/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-kube-api-access-qcrkn\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619157 4922 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619230 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619302 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619390 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619494 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.619579 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb828d-acb3-4b88-96dc-8d3bb8c49e86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.638166 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 18 11:57:34 crc kubenswrapper[4922]: I0218 11:57:34.722335 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.223237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24eb828d-acb3-4b88-96dc-8d3bb8c49e86","Type":"ContainerDied","Data":"205155ab7a59a38e41604dd6477d8795045c94879e29fefe3e7383b4bc42a275"} Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.223307 4922 scope.go:117] "RemoveContainer" containerID="fae0743e5e0fb73717e68120a40713dc2bcdd3fa357699afed1bff0f5a1368e5" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.226541 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.257190 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.263585 4922 scope.go:117] "RemoveContainer" containerID="2c9d001f560c7f93208a3584bc230854257ebcad49ef86ea326e6025425c596a" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.273748 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.289711 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:35 crc kubenswrapper[4922]: E0218 11:57:35.290191 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="init" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290205 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="init" Feb 18 11:57:35 crc kubenswrapper[4922]: E0218 11:57:35.290225 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="dnsmasq-dns" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290231 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="dnsmasq-dns" Feb 18 11:57:35 crc kubenswrapper[4922]: E0218 11:57:35.290243 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-httpd" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290250 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-httpd" Feb 18 11:57:35 crc kubenswrapper[4922]: E0218 11:57:35.290270 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-log" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290276 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-log" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290482 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61bee1b-0ee4-4c97-8d5d-8655406f124c" containerName="dnsmasq-dns" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290496 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-httpd" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.290512 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" containerName="glance-log" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.291625 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.294919 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.311091 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.341744 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.437737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.437791 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.437839 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.437976 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-logs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.438000 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgznr\" (UniqueName: \"kubernetes.io/projected/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-kube-api-access-fgznr\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.438019 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.438060 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.438230 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544284 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544611 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544638 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544678 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544739 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-logs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgznr\" (UniqueName: \"kubernetes.io/projected/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-kube-api-access-fgznr\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544797 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.544854 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.545379 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.546671 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-logs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.546853 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.555428 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.555542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.555552 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.556057 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.577197 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgznr\" (UniqueName: \"kubernetes.io/projected/342c8bfd-c2d6-4afd-b2be-3e1474b63b62-kube-api-access-fgznr\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.625303 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"342c8bfd-c2d6-4afd-b2be-3e1474b63b62\") " pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.770217 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.927188 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.949221 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.954985 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmnpw\" (UniqueName: \"kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw\") pod \"157bc07b-77b8-4a29-b8e0-9a205215187b\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.955193 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts\") pod \"157bc07b-77b8-4a29-b8e0-9a205215187b\" (UID: \"157bc07b-77b8-4a29-b8e0-9a205215187b\") " Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.955851 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "157bc07b-77b8-4a29-b8e0-9a205215187b" (UID: "157bc07b-77b8-4a29-b8e0-9a205215187b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.956492 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/157bc07b-77b8-4a29-b8e0-9a205215187b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.963544 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw" (OuterVolumeSpecName: "kube-api-access-mmnpw") pod "157bc07b-77b8-4a29-b8e0-9a205215187b" (UID: "157bc07b-77b8-4a29-b8e0-9a205215187b"). InnerVolumeSpecName "kube-api-access-mmnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.990843 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:35 crc kubenswrapper[4922]: I0218 11:57:35.999159 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.027508 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.038531 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.059252 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh78w\" (UniqueName: \"kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w\") pod \"f9360e33-9ae9-4b84-a898-c2c22626a565\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.059331 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts\") pod \"f9360e33-9ae9-4b84-a898-c2c22626a565\" (UID: \"f9360e33-9ae9-4b84-a898-c2c22626a565\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.060210 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9360e33-9ae9-4b84-a898-c2c22626a565" (UID: "f9360e33-9ae9-4b84-a898-c2c22626a565"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.064602 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmnpw\" (UniqueName: \"kubernetes.io/projected/157bc07b-77b8-4a29-b8e0-9a205215187b-kube-api-access-mmnpw\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.066012 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w" (OuterVolumeSpecName: "kube-api-access-rh78w") pod "f9360e33-9ae9-4b84-a898-c2c22626a565" (UID: "f9360e33-9ae9-4b84-a898-c2c22626a565"). InnerVolumeSpecName "kube-api-access-rh78w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.171802 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4f87\" (UniqueName: \"kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87\") pod \"41c3abe9-3a81-44ef-babf-818b176f6437\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.171863 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9855v\" (UniqueName: \"kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v\") pod \"cea3a613-3571-4de4-be73-07a4db1c146e\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.171987 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxsg\" (UniqueName: \"kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg\") pod \"7513cf0a-f653-48b9-a365-9732179aaffc\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172046 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts\") pod \"7513cf0a-f653-48b9-a365-9732179aaffc\" (UID: \"7513cf0a-f653-48b9-a365-9732179aaffc\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172109 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts\") pod \"41c3abe9-3a81-44ef-babf-818b176f6437\" (UID: \"41c3abe9-3a81-44ef-babf-818b176f6437\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172187 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts\") pod \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172230 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts\") pod \"cea3a613-3571-4de4-be73-07a4db1c146e\" (UID: \"cea3a613-3571-4de4-be73-07a4db1c146e\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172293 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7dh4\" (UniqueName: \"kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4\") pod \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\" (UID: \"9b28b3ba-c697-4cef-8e3f-e41317e3abe6\") " Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172810 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh78w\" (UniqueName: \"kubernetes.io/projected/f9360e33-9ae9-4b84-a898-c2c22626a565-kube-api-access-rh78w\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.172830 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9360e33-9ae9-4b84-a898-c2c22626a565-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: 
I0218 11:57:36.174437 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7513cf0a-f653-48b9-a365-9732179aaffc" (UID: "7513cf0a-f653-48b9-a365-9732179aaffc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.176059 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41c3abe9-3a81-44ef-babf-818b176f6437" (UID: "41c3abe9-3a81-44ef-babf-818b176f6437"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.176087 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b28b3ba-c697-4cef-8e3f-e41317e3abe6" (UID: "9b28b3ba-c697-4cef-8e3f-e41317e3abe6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.176484 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cea3a613-3571-4de4-be73-07a4db1c146e" (UID: "cea3a613-3571-4de4-be73-07a4db1c146e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.178769 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v" (OuterVolumeSpecName: "kube-api-access-9855v") pod "cea3a613-3571-4de4-be73-07a4db1c146e" (UID: "cea3a613-3571-4de4-be73-07a4db1c146e"). InnerVolumeSpecName "kube-api-access-9855v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.179102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4" (OuterVolumeSpecName: "kube-api-access-k7dh4") pod "9b28b3ba-c697-4cef-8e3f-e41317e3abe6" (UID: "9b28b3ba-c697-4cef-8e3f-e41317e3abe6"). InnerVolumeSpecName "kube-api-access-k7dh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.179266 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg" (OuterVolumeSpecName: "kube-api-access-vbxsg") pod "7513cf0a-f653-48b9-a365-9732179aaffc" (UID: "7513cf0a-f653-48b9-a365-9732179aaffc"). InnerVolumeSpecName "kube-api-access-vbxsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.188978 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87" (OuterVolumeSpecName: "kube-api-access-v4f87") pod "41c3abe9-3a81-44ef-babf-818b176f6437" (UID: "41c3abe9-3a81-44ef-babf-818b176f6437"). InnerVolumeSpecName "kube-api-access-v4f87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.239101 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-96dc-account-create-update-4px58" event={"ID":"7513cf0a-f653-48b9-a365-9732179aaffc","Type":"ContainerDied","Data":"99782e6d55a2e38110d1ce6513a88af301a487264155e726820a08e94b0d1c9f"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.239140 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99782e6d55a2e38110d1ce6513a88af301a487264155e726820a08e94b0d1c9f" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.239196 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-96dc-account-create-update-4px58" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.244759 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c19a-account-create-update-24shd" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.245693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c19a-account-create-update-24shd" event={"ID":"f9360e33-9ae9-4b84-a898-c2c22626a565","Type":"ContainerDied","Data":"95ae0db4dc810a397e5536957907573f044ce4062de198f02463dffeab24a900"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.245736 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ae0db4dc810a397e5536957907573f044ce4062de198f02463dffeab24a900" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.247325 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p2pzf" event={"ID":"157bc07b-77b8-4a29-b8e0-9a205215187b","Type":"ContainerDied","Data":"15aa49b2b6d6e10c8c1597a89c8dfd815ac09c35f2814b8e479c59053ff9efa1"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.247389 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15aa49b2b6d6e10c8c1597a89c8dfd815ac09c35f2814b8e479c59053ff9efa1" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.247449 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p2pzf" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.257656 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-ddrmz" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.257663 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ddrmz" event={"ID":"41c3abe9-3a81-44ef-babf-818b176f6437","Type":"ContainerDied","Data":"6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.257695 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b97ca742e23751715b7c18d5f5f62f93230d2dcc1c0f65e7e7422315924cd92" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.266553 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" event={"ID":"9b28b3ba-c697-4cef-8e3f-e41317e3abe6","Type":"ContainerDied","Data":"c71bbdea45ab7a402cd8fc37c94e31cbd03659099c218416879906794286646a"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.266623 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71bbdea45ab7a402cd8fc37c94e31cbd03659099c218416879906794286646a" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.266710 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ea20-account-create-update-k5t5k" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.273283 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-b9cbq" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.273552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-b9cbq" event={"ID":"cea3a613-3571-4de4-be73-07a4db1c146e","Type":"ContainerDied","Data":"73eb578f99941b6e8fcccd4f7146c408046aea775fac188592a56b6aa1c8c60e"} Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.273615 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73eb578f99941b6e8fcccd4f7146c408046aea775fac188592a56b6aa1c8c60e" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276568 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276605 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea3a613-3571-4de4-be73-07a4db1c146e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276619 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7dh4\" (UniqueName: \"kubernetes.io/projected/9b28b3ba-c697-4cef-8e3f-e41317e3abe6-kube-api-access-k7dh4\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276635 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4f87\" (UniqueName: \"kubernetes.io/projected/41c3abe9-3a81-44ef-babf-818b176f6437-kube-api-access-v4f87\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276647 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9855v\" (UniqueName: \"kubernetes.io/projected/cea3a613-3571-4de4-be73-07a4db1c146e-kube-api-access-9855v\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276661 4922 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vbxsg\" (UniqueName: \"kubernetes.io/projected/7513cf0a-f653-48b9-a365-9732179aaffc-kube-api-access-vbxsg\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276674 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7513cf0a-f653-48b9-a365-9732179aaffc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.276686 4922 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41c3abe9-3a81-44ef-babf-818b176f6437-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:36 crc kubenswrapper[4922]: I0218 11:57:36.578481 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 11:57:36 crc kubenswrapper[4922]: W0218 11:57:36.578558 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342c8bfd_c2d6_4afd_b2be_3e1474b63b62.slice/crio-933fa04c15f9dacf8ce6f064b1ad2f6a28db246e3d767262f15546b2719d0cb1 WatchSource:0}: Error finding container 933fa04c15f9dacf8ce6f064b1ad2f6a28db246e3d767262f15546b2719d0cb1: Status 404 returned error can't find the container with id 933fa04c15f9dacf8ce6f064b1ad2f6a28db246e3d767262f15546b2719d0cb1 Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.010204 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24eb828d-acb3-4b88-96dc-8d3bb8c49e86" path="/var/lib/kubelet/pods/24eb828d-acb3-4b88-96dc-8d3bb8c49e86/volumes" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.281179 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.291875 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"342c8bfd-c2d6-4afd-b2be-3e1474b63b62","Type":"ContainerStarted","Data":"933fa04c15f9dacf8ce6f064b1ad2f6a28db246e3d767262f15546b2719d0cb1"} Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.294291 4922 generic.go:334] "Generic (PLEG): container finished" podID="7810aaca-e072-467b-bba7-6a3e12310c68" containerID="be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4" exitCode=0 Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.294330 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerDied","Data":"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4"} Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.294346 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.294353 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7810aaca-e072-467b-bba7-6a3e12310c68","Type":"ContainerDied","Data":"9937aae78bceb48bd4f47887b4b7c1fa9f743a0bf2b9a03c23a054415125619f"} Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.294419 4922 scope.go:117] "RemoveContainer" containerID="be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.344690 4922 scope.go:117] "RemoveContainer" containerID="0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.401331 4922 scope.go:117] "RemoveContainer" containerID="be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.401857 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4\": container with ID starting with be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4 not found: ID does not exist" containerID="be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.401898 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4"} err="failed to get container status \"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4\": rpc error: code = NotFound desc = could not find container \"be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4\": container with ID starting with be139187f859b824114f17925d841efb955f0e7b83d0a0144a156b5f7b3e4df4 not found: ID does not exist" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.401928 4922 scope.go:117] "RemoveContainer" containerID="0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.402272 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18\": container with ID starting with 0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18 not found: ID does not exist" containerID="0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.402305 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18"} err="failed to get container status \"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18\": rpc error: code = NotFound desc = could not find container \"0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18\": container with ID starting with 0619ad10d88107a566fb4c0dbd0711c30708a02e901f59a47be08cbb81b88d18 not found: ID does not exist" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.415972 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: 
\"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416078 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416173 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416204 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416232 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp7nr\" (UniqueName: \"kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416247 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416334 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle\") pod \"7810aaca-e072-467b-bba7-6a3e12310c68\" (UID: \"7810aaca-e072-467b-bba7-6a3e12310c68\") " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.416806 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs" (OuterVolumeSpecName: "logs") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.417055 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.424440 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts" (OuterVolumeSpecName: "scripts") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.431481 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.433647 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr" (OuterVolumeSpecName: "kube-api-access-zp7nr") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "kube-api-access-zp7nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.509415 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522273 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522320 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp7nr\" (UniqueName: \"kubernetes.io/projected/7810aaca-e072-467b-bba7-6a3e12310c68-kube-api-access-zp7nr\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522339 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522351 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522383 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7810aaca-e072-467b-bba7-6a3e12310c68-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.522398 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.540531 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data" (OuterVolumeSpecName: "config-data") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.572908 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.574474 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7810aaca-e072-467b-bba7-6a3e12310c68" (UID: "7810aaca-e072-467b-bba7-6a3e12310c68"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.624479 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.624514 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.624526 4922 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7810aaca-e072-467b-bba7-6a3e12310c68-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.707317 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.720334 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744126 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744548 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7513cf0a-f653-48b9-a365-9732179aaffc" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744561 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7513cf0a-f653-48b9-a365-9732179aaffc" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744576 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9360e33-9ae9-4b84-a898-c2c22626a565" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744582 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9360e33-9ae9-4b84-a898-c2c22626a565" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744589 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c3abe9-3a81-44ef-babf-818b176f6437" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744595 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c3abe9-3a81-44ef-babf-818b176f6437" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744613 4922 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-log" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744620 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-log" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744626 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b28b3ba-c697-4cef-8e3f-e41317e3abe6" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744631 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b28b3ba-c697-4cef-8e3f-e41317e3abe6" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744652 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157bc07b-77b8-4a29-b8e0-9a205215187b" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744658 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="157bc07b-77b8-4a29-b8e0-9a205215187b" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744675 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-httpd" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744680 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-httpd" Feb 18 11:57:37 crc kubenswrapper[4922]: E0218 11:57:37.744687 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea3a613-3571-4de4-be73-07a4db1c146e" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744693 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea3a613-3571-4de4-be73-07a4db1c146e" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744845 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9360e33-9ae9-4b84-a898-c2c22626a565" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744858 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea3a613-3571-4de4-be73-07a4db1c146e" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744868 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c3abe9-3a81-44ef-babf-818b176f6437" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744878 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="157bc07b-77b8-4a29-b8e0-9a205215187b" containerName="mariadb-database-create" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744888 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-log" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744899 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7513cf0a-f653-48b9-a365-9732179aaffc" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744908 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b28b3ba-c697-4cef-8e3f-e41317e3abe6" containerName="mariadb-account-create-update" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.744919 4922 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" containerName="glance-httpd" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.745874 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.751986 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.753813 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.761474 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.930941 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931002 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931073 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-logs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931088 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931131 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931156 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931186 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " 
pod="openstack/glance-default-external-api-0" Feb 18 11:57:37 crc kubenswrapper[4922]: I0218 11:57:37.931231 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghvh\" (UniqueName: \"kubernetes.io/projected/f5056168-d177-4e40-813a-db20d428ce9a-kube-api-access-6ghvh\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033132 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033199 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033231 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033268 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033313 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ghvh\" (UniqueName: \"kubernetes.io/projected/f5056168-d177-4e40-813a-db20d428ce9a-kube-api-access-6ghvh\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033342 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033383 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.033440 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-logs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 
crc kubenswrapper[4922]: I0218 11:57:38.036255 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-logs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.037012 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.037165 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5056168-d177-4e40-813a-db20d428ce9a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.039874 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.043299 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.044022 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.044074 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5056168-d177-4e40-813a-db20d428ce9a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.065687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ghvh\" (UniqueName: \"kubernetes.io/projected/f5056168-d177-4e40-813a-db20d428ce9a-kube-api-access-6ghvh\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.072525 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f5056168-d177-4e40-813a-db20d428ce9a\") " pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.320582 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"342c8bfd-c2d6-4afd-b2be-3e1474b63b62","Type":"ContainerStarted","Data":"7d799824fdf1a5ef664b15c2397490ac0c1217659ad5a3da53e1b8a625ebf5c6"} Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.320642 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"342c8bfd-c2d6-4afd-b2be-3e1474b63b62","Type":"ContainerStarted","Data":"653f5b3f2a92734abd872bb0b1bf1e7b948a6cc6b5fb9ca31a149128eede3844"} Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.324675 4922 generic.go:334] "Generic (PLEG): container finished" podID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerID="4f7ad4cbfa0275aa87aaa646fee7583ae6b1acc937b50ade063f9b7ab68d66a4" exitCode=0 Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.324723 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerDied","Data":"4f7ad4cbfa0275aa87aaa646fee7583ae6b1acc937b50ade063f9b7ab68d66a4"} Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.350219 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.350197364 podStartE2EDuration="3.350197364s" podCreationTimestamp="2026-02-18 11:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:57:38.342861658 +0000 UTC m=+1260.070565738" watchObservedRunningTime="2026-02-18 11:57:38.350197364 +0000 UTC m=+1260.077901444" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.364276 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.694095 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754178 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754308 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754456 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754495 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754552 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754627 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26464\" (UniqueName: \"kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.754697 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts\") pod \"3ad26da5-56a1-4f67-aae9-ab321499352f\" (UID: \"3ad26da5-56a1-4f67-aae9-ab321499352f\") " Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.761852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.761997 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.775909 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464" (OuterVolumeSpecName: "kube-api-access-26464") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "kube-api-access-26464". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.776750 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts" (OuterVolumeSpecName: "scripts") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.814604 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.860534 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.860607 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.860733 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26464\" (UniqueName: \"kubernetes.io/projected/3ad26da5-56a1-4f67-aae9-ab321499352f-kube-api-access-26464\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.860751 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.860763 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ad26da5-56a1-4f67-aae9-ab321499352f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.900844 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.914292 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data" (OuterVolumeSpecName: "config-data") pod "3ad26da5-56a1-4f67-aae9-ab321499352f" (UID: "3ad26da5-56a1-4f67-aae9-ab321499352f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.963943 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.963983 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26da5-56a1-4f67-aae9-ab321499352f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:38 crc kubenswrapper[4922]: I0218 11:57:38.989174 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7810aaca-e072-467b-bba7-6a3e12310c68" path="/var/lib/kubelet/pods/7810aaca-e072-467b-bba7-6a3e12310c68/volumes" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.024508 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 11:57:39 crc kubenswrapper[4922]: W0218 11:57:39.026011 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5056168_d177_4e40_813a_db20d428ce9a.slice/crio-cbc622c3c7d1ad91b0ac3fa62385fdfc6fb82690de70ed799bc25da48755f0c7 WatchSource:0}: Error finding container cbc622c3c7d1ad91b0ac3fa62385fdfc6fb82690de70ed799bc25da48755f0c7: Status 404 returned error can't find the container with id cbc622c3c7d1ad91b0ac3fa62385fdfc6fb82690de70ed799bc25da48755f0c7 Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.339507 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5056168-d177-4e40-813a-db20d428ce9a","Type":"ContainerStarted","Data":"cbc622c3c7d1ad91b0ac3fa62385fdfc6fb82690de70ed799bc25da48755f0c7"} Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.342599 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.343480 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3ad26da5-56a1-4f67-aae9-ab321499352f","Type":"ContainerDied","Data":"4ab86a79e660c19ee3ee370625c973f496ff01d2184f2823cb5772f2dd377459"} Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.343525 4922 scope.go:117] "RemoveContainer" containerID="861840f7d8623349410807b78c7acecc1dbed44761c62e936d5db92618e71c43" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.367845 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.377300 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.391992 4922 scope.go:117] "RemoveContainer" containerID="0426fee054517907fa41850ee4ebf4b625cb2e9874199241df43ce1756e27ff1" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.410987 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:39 crc kubenswrapper[4922]: E0218 11:57:39.411942 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="proxy-httpd" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.411967 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="proxy-httpd" Feb 18 11:57:39 crc kubenswrapper[4922]: E0218 11:57:39.411989 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-notification-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.411997 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-notification-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: E0218 11:57:39.412012 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-central-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412020 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-central-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: E0218 11:57:39.412049 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="sg-core" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412056 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="sg-core" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412261 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-notification-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412283 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="ceilometer-central-agent" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412299 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" containerName="sg-core" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.412315 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" 
containerName="proxy-httpd" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.418604 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.425138 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.425424 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.438989 4922 scope.go:117] "RemoveContainer" containerID="6840f0d264514efcd94bc4a02aec112cf5f76e7909fa91e4880bd5ff5ad6f1bc" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.451118 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474420 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474500 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474534 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474616 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.474964 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.475130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h9k\" (UniqueName: \"kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.490128 4922 scope.go:117] 
"RemoveContainer" containerID="4f7ad4cbfa0275aa87aaa646fee7583ae6b1acc937b50ade063f9b7ab68d66a4" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.577859 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578248 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578327 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578390 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578421 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578468 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h9k\" (UniqueName: \"kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578593 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.578810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.579122 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.587194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.593950 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.598314 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.599754 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h9k\" (UniqueName: \"kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.600175 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data\") pod \"ceilometer-0\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.759945 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.808826 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.808896 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.808965 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.809711 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:57:39 crc kubenswrapper[4922]: I0218 11:57:39.809777 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc" gracePeriod=600 Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.246046 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Feb 18 11:57:40 crc kubenswrapper[4922]: W0218 11:57:40.255771 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21959345_b5c7_4013_a975_3d02790d2e8a.slice/crio-65b652de66b9a52f98793fd9afbf09e548d09526752352ff7c8d2022f8399ea5 WatchSource:0}: Error finding container 65b652de66b9a52f98793fd9afbf09e548d09526752352ff7c8d2022f8399ea5: Status 404 returned error can't find the container with id 65b652de66b9a52f98793fd9afbf09e548d09526752352ff7c8d2022f8399ea5 Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.396142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerStarted","Data":"65b652de66b9a52f98793fd9afbf09e548d09526752352ff7c8d2022f8399ea5"} Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.401258 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc" exitCode=0 Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.401320 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc"} Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.401347 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c"} Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.401379 4922 scope.go:117] "RemoveContainer" containerID="3ecb5c1316b12312a3aa69dd6c259c33fc3f21c1c782e1e74d55d4b725fb05a8" Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.406862 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5056168-d177-4e40-813a-db20d428ce9a","Type":"ContainerStarted","Data":"90371bd96ab264a9bb82ffb0fc0dd87731cc01ffcabe85aa1f861569e31349f5"} Feb 18 11:57:40 crc kubenswrapper[4922]: I0218 11:57:40.991643 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad26da5-56a1-4f67-aae9-ab321499352f" path="/var/lib/kubelet/pods/3ad26da5-56a1-4f67-aae9-ab321499352f/volumes" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.450636 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5056168-d177-4e40-813a-db20d428ce9a","Type":"ContainerStarted","Data":"181d13923e340b31677d3b88a3afe9db22758e34cda4decd2bdf335c4bfb32bc"} Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.473690 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerStarted","Data":"cb478ec359f29d9399971270b2b6115201780be8066b8577d1cced1b5e9c1a76"} Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.721677 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.72165909 podStartE2EDuration="4.72165909s" podCreationTimestamp="2026-02-18 11:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-18 11:57:40.45839042 +0000 UTC m=+1262.186094500" watchObservedRunningTime="2026-02-18 11:57:41.72165909 +0000 UTC m=+1263.449363180" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.732397 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ws7m8"] Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.744189 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ws7m8"] Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.744292 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.749016 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m8flj" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.749177 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.749320 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.926473 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.926845 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdr6h\" (UniqueName: \"kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.926977 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:41 crc kubenswrapper[4922]: I0218 11:57:41.927011 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.029026 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdr6h\" (UniqueName: \"kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.029171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts\") pod 
\"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.029200 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.029313 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.037271 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.046177 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.047321 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdr6h\" (UniqueName: \"kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.050967 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data\") pod \"nova-cell0-conductor-db-sync-ws7m8\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.067204 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.500546 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerStarted","Data":"001d92f56b3881bb246a5b8d6020a874a2e98a7c7721ad75c44399ad7e7ed2db"} Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.501193 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerStarted","Data":"99f8fed10436d49f464412fa00af054c4cb5c5d4c102938ed7110cfe0fb2e1b2"} Feb 18 11:57:42 crc kubenswrapper[4922]: I0218 11:57:42.616400 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ws7m8"] Feb 18 11:57:43 crc kubenswrapper[4922]: I0218 11:57:43.553870 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" event={"ID":"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5","Type":"ContainerStarted","Data":"19238151c6895845aeb1fcd79101b75a8cace32e850eb034480705b42b7e45a0"} Feb 18 11:57:44 crc kubenswrapper[4922]: I0218 11:57:44.568977 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerStarted","Data":"c0108bf2b931e5acca4c64b4090bbf10772b51b4c02d9b81008189353130e77b"} Feb 18 11:57:44 crc kubenswrapper[4922]: I0218 11:57:44.569302 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:57:44 crc kubenswrapper[4922]: I0218 11:57:44.603127 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.748672096 podStartE2EDuration="5.603107561s" podCreationTimestamp="2026-02-18 11:57:39 +0000 UTC" firstStartedPulling="2026-02-18 11:57:40.266965881 +0000 UTC m=+1261.994669961" lastFinishedPulling="2026-02-18 11:57:44.121401346 +0000 UTC m=+1265.849105426" observedRunningTime="2026-02-18 11:57:44.595174281 +0000 UTC m=+1266.322878371" watchObservedRunningTime="2026-02-18 11:57:44.603107561 +0000 UTC m=+1266.330811641" Feb 18 11:57:45 crc kubenswrapper[4922]: I0218 11:57:45.928600 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:45 crc kubenswrapper[4922]: I0218 11:57:45.928914 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:45 crc kubenswrapper[4922]: I0218 11:57:45.987349 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:45 crc kubenswrapper[4922]: I0218 11:57:45.988515 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:46 crc kubenswrapper[4922]: I0218 11:57:46.591975 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:46 crc kubenswrapper[4922]: I0218 11:57:46.592025 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.365027 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: 
I0218 11:57:48.365391 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.396533 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.409119 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.619028 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.619311 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.899911 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.900027 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:48 crc kubenswrapper[4922]: I0218 11:57:48.952439 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 11:57:49 crc kubenswrapper[4922]: I0218 11:57:49.966660 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:57:49 crc kubenswrapper[4922]: I0218 11:57:49.968013 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="bdb6fddf-10f2-476b-822f-130f6fa12007" containerName="watcher-decision-engine" containerID="cri-o://bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115" gracePeriod=30 Feb 18 11:57:50 crc kubenswrapper[4922]: I0218 11:57:50.588113 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:57:51 crc kubenswrapper[4922]: I0218 11:57:51.207326 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 11:57:51 crc kubenswrapper[4922]: I0218 11:57:51.207446 4922 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:57:51 crc kubenswrapper[4922]: I0218 11:57:51.257962 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.038970 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.039720 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-central-agent" containerID="cri-o://cb478ec359f29d9399971270b2b6115201780be8066b8577d1cced1b5e9c1a76" gracePeriod=30 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.039807 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="sg-core" containerID="cri-o://001d92f56b3881bb246a5b8d6020a874a2e98a7c7721ad75c44399ad7e7ed2db" gracePeriod=30 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.039881 4922 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/ceilometer-0" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-notification-agent" containerID="cri-o://99f8fed10436d49f464412fa00af054c4cb5c5d4c102938ed7110cfe0fb2e1b2" gracePeriod=30 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.039842 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="proxy-httpd" containerID="cri-o://c0108bf2b931e5acca4c64b4090bbf10772b51b4c02d9b81008189353130e77b" gracePeriod=30 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.681235 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" event={"ID":"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5","Type":"ContainerStarted","Data":"38158e702c80c7a31868549296123efd8bf7bfe465aae6ab472768cf855c3a38"} Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.683941 4922 generic.go:334] "Generic (PLEG): container finished" podID="21959345-b5c7-4013-a975-3d02790d2e8a" containerID="c0108bf2b931e5acca4c64b4090bbf10772b51b4c02d9b81008189353130e77b" exitCode=0 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.683976 4922 generic.go:334] "Generic (PLEG): container finished" podID="21959345-b5c7-4013-a975-3d02790d2e8a" containerID="001d92f56b3881bb246a5b8d6020a874a2e98a7c7721ad75c44399ad7e7ed2db" exitCode=2 Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.684000 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerDied","Data":"c0108bf2b931e5acca4c64b4090bbf10772b51b4c02d9b81008189353130e77b"} Feb 18 11:57:52 crc kubenswrapper[4922]: I0218 11:57:52.684030 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerDied","Data":"001d92f56b3881bb246a5b8d6020a874a2e98a7c7721ad75c44399ad7e7ed2db"} Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.241598 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f57669c89-7wt5g" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.271120 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" podStartSLOduration=3.105033463 podStartE2EDuration="12.271101443s" podCreationTimestamp="2026-02-18 11:57:41 +0000 UTC" firstStartedPulling="2026-02-18 11:57:42.627388014 +0000 UTC m=+1264.355092094" lastFinishedPulling="2026-02-18 11:57:51.793455984 +0000 UTC m=+1273.521160074" observedRunningTime="2026-02-18 11:57:52.703833164 +0000 UTC m=+1274.431537244" watchObservedRunningTime="2026-02-18 11:57:53.271101443 +0000 UTC m=+1274.998805523" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.320329 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.320977 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-979b8465b-gmztk" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-api" containerID="cri-o://c50b1f228a346e798dee756ffe7172e5752cdc2bb9dd07e1f2ab8318d4e33a78" gracePeriod=30 Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.321425 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-979b8465b-gmztk" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" 
containerName="neutron-httpd" containerID="cri-o://dd50276aac03478ea25adcde79bcd1337f3694428dc64ce90dcf6fe92e773c3c" gracePeriod=30 Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.702693 4922 generic.go:334] "Generic (PLEG): container finished" podID="21959345-b5c7-4013-a975-3d02790d2e8a" containerID="99f8fed10436d49f464412fa00af054c4cb5c5d4c102938ed7110cfe0fb2e1b2" exitCode=0 Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.702734 4922 generic.go:334] "Generic (PLEG): container finished" podID="21959345-b5c7-4013-a975-3d02790d2e8a" containerID="cb478ec359f29d9399971270b2b6115201780be8066b8577d1cced1b5e9c1a76" exitCode=0 Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.702774 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerDied","Data":"99f8fed10436d49f464412fa00af054c4cb5c5d4c102938ed7110cfe0fb2e1b2"} Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.702831 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerDied","Data":"cb478ec359f29d9399971270b2b6115201780be8066b8577d1cced1b5e9c1a76"} Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.708527 4922 generic.go:334] "Generic (PLEG): container finished" podID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerID="dd50276aac03478ea25adcde79bcd1337f3694428dc64ce90dcf6fe92e773c3c" exitCode=0 Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.709650 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerDied","Data":"dd50276aac03478ea25adcde79bcd1337f3694428dc64ce90dcf6fe92e773c3c"} Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.854160 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.905768 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.905852 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.905970 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6h9k\" (UniqueName: \"kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906009 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906113 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906186 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd\") pod \"21959345-b5c7-4013-a975-3d02790d2e8a\" (UID: \"21959345-b5c7-4013-a975-3d02790d2e8a\") " Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906506 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906757 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.906875 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.914762 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts" (OuterVolumeSpecName: "scripts") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.928634 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k" (OuterVolumeSpecName: "kube-api-access-b6h9k") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "kube-api-access-b6h9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:53 crc kubenswrapper[4922]: I0218 11:57:53.952526 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.009897 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.009937 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21959345-b5c7-4013-a975-3d02790d2e8a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.009950 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.009962 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6h9k\" (UniqueName: \"kubernetes.io/projected/21959345-b5c7-4013-a975-3d02790d2e8a-kube-api-access-b6h9k\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.044462 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data" (OuterVolumeSpecName: "config-data") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.053521 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21959345-b5c7-4013-a975-3d02790d2e8a" (UID: "21959345-b5c7-4013-a975-3d02790d2e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.113039 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.113076 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21959345-b5c7-4013-a975-3d02790d2e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.722601 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21959345-b5c7-4013-a975-3d02790d2e8a","Type":"ContainerDied","Data":"65b652de66b9a52f98793fd9afbf09e548d09526752352ff7c8d2022f8399ea5"} Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.722652 4922 scope.go:117] "RemoveContainer" containerID="c0108bf2b931e5acca4c64b4090bbf10772b51b4c02d9b81008189353130e77b" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.722717 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.757199 4922 scope.go:117] "RemoveContainer" containerID="001d92f56b3881bb246a5b8d6020a874a2e98a7c7721ad75c44399ad7e7ed2db" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.767767 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.788800 4922 scope.go:117] "RemoveContainer" containerID="99f8fed10436d49f464412fa00af054c4cb5c5d4c102938ed7110cfe0fb2e1b2" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.792017 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.809496 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:54 crc kubenswrapper[4922]: E0218 11:57:54.810362 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="proxy-httpd" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810402 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="proxy-httpd" Feb 18 11:57:54 crc kubenswrapper[4922]: E0218 11:57:54.810423 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-central-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810431 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-central-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: E0218 11:57:54.810446 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-notification-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810455 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-notification-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: E0218 11:57:54.810464 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="sg-core" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810471 4922 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="sg-core" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810707 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="proxy-httpd" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810724 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-notification-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810744 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="ceilometer-central-agent" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.810757 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" containerName="sg-core" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.812402 4922 scope.go:117] "RemoveContainer" containerID="cb478ec359f29d9399971270b2b6115201780be8066b8577d1cced1b5e9c1a76" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.815083 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.818902 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.818952 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.837630 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934616 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934635 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcct\" (UniqueName: \"kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct\") pod \"ceilometer-0\" (UID: 
\"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.934999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.935046 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:54 crc kubenswrapper[4922]: I0218 11:57:54.986272 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21959345-b5c7-4013-a975-3d02790d2e8a" path="/var/lib/kubelet/pods/21959345-b5c7-4013-a975-3d02790d2e8a/volumes" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.037663 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcct\" (UniqueName: \"kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.037842 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.037900 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.037992 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.038119 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.038149 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.038369 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc 
kubenswrapper[4922]: I0218 11:57:55.038736 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.039388 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.042478 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.043207 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.043288 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.052884 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.063895 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcct\" (UniqueName: \"kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct\") pod \"ceilometer-0\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.138901 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:57:55 crc kubenswrapper[4922]: W0218 11:57:55.595342 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01766e5f_d149_4175_9fdb_15e65b0e0665.slice/crio-9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8 WatchSource:0}: Error finding container 9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8: Status 404 returned error can't find the container with id 9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8 Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.599131 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:55 crc kubenswrapper[4922]: I0218 11:57:55.733438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerStarted","Data":"9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8"} Feb 18 11:57:56 crc kubenswrapper[4922]: I0218 11:57:56.765642 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerStarted","Data":"6c511819319a5fc193d0f75ed63d9083c79d2c12407691912b456da48212acab"} Feb 18 11:57:57 crc kubenswrapper[4922]: I0218 11:57:57.776090 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerStarted","Data":"d7cb0d481469976e2f0d27c45a95752ec61c6325990c025f51f551cbf42c3d51"} Feb 18 11:57:58 crc kubenswrapper[4922]: I0218 11:57:58.788921 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerStarted","Data":"019a6342d78a17e9007569b8df388ae4cd83074ceff02bfe55eb5c3a71054609"} Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.265296 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.616045 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.741592 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca\") pod \"bdb6fddf-10f2-476b-822f-130f6fa12007\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.741689 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff6wh\" (UniqueName: \"kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh\") pod \"bdb6fddf-10f2-476b-822f-130f6fa12007\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.741791 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs\") pod \"bdb6fddf-10f2-476b-822f-130f6fa12007\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.741870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle\") pod \"bdb6fddf-10f2-476b-822f-130f6fa12007\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.742148 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data\") pod \"bdb6fddf-10f2-476b-822f-130f6fa12007\" (UID: \"bdb6fddf-10f2-476b-822f-130f6fa12007\") " Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.742850 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs" (OuterVolumeSpecName: "logs") pod "bdb6fddf-10f2-476b-822f-130f6fa12007" (UID: "bdb6fddf-10f2-476b-822f-130f6fa12007"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.748641 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh" (OuterVolumeSpecName: "kube-api-access-ff6wh") pod "bdb6fddf-10f2-476b-822f-130f6fa12007" (UID: "bdb6fddf-10f2-476b-822f-130f6fa12007"). InnerVolumeSpecName "kube-api-access-ff6wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.796556 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdb6fddf-10f2-476b-822f-130f6fa12007" (UID: "bdb6fddf-10f2-476b-822f-130f6fa12007"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.818865 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bdb6fddf-10f2-476b-822f-130f6fa12007" (UID: "bdb6fddf-10f2-476b-822f-130f6fa12007"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.827677 4922 generic.go:334] "Generic (PLEG): container finished" podID="bdb6fddf-10f2-476b-822f-130f6fa12007" containerID="bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115" exitCode=0 Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.827782 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"bdb6fddf-10f2-476b-822f-130f6fa12007","Type":"ContainerDied","Data":"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115"} Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.827852 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"bdb6fddf-10f2-476b-822f-130f6fa12007","Type":"ContainerDied","Data":"767ebbd1c5208f481e0a0c9a07d1e2942ae4da643c6fc17067643a26968c3ac5"} Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.827874 4922 scope.go:117] "RemoveContainer" containerID="bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.828069 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.828176 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data" (OuterVolumeSpecName: "config-data") pod "bdb6fddf-10f2-476b-822f-130f6fa12007" (UID: "bdb6fddf-10f2-476b-822f-130f6fa12007"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.836350 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerStarted","Data":"ede12ae546b38989e02144d9c29b45bfe17c0490fbfcd99cc1a79c54dc349009"} Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.836625 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-central-agent" containerID="cri-o://6c511819319a5fc193d0f75ed63d9083c79d2c12407691912b456da48212acab" gracePeriod=30 Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.837046 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.837512 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="proxy-httpd" containerID="cri-o://ede12ae546b38989e02144d9c29b45bfe17c0490fbfcd99cc1a79c54dc349009" gracePeriod=30 Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.837593 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="sg-core" containerID="cri-o://019a6342d78a17e9007569b8df388ae4cd83074ceff02bfe55eb5c3a71054609" gracePeriod=30 Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.837655 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-notification-agent" 
containerID="cri-o://d7cb0d481469976e2f0d27c45a95752ec61c6325990c025f51f551cbf42c3d51" gracePeriod=30 Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.845079 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.845114 4922 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.845125 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff6wh\" (UniqueName: \"kubernetes.io/projected/bdb6fddf-10f2-476b-822f-130f6fa12007-kube-api-access-ff6wh\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.845147 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb6fddf-10f2-476b-822f-130f6fa12007-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.845159 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb6fddf-10f2-476b-822f-130f6fa12007-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.867698 4922 scope.go:117] "RemoveContainer" containerID="bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115" Feb 18 11:57:59 crc kubenswrapper[4922]: E0218 11:57:59.868914 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115\": container with ID starting with bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115 not found: ID does not exist" containerID="bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.868975 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115"} err="failed to get container status \"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115\": rpc error: code = NotFound desc = could not find container \"bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115\": container with ID starting with bc2ff599d529126d7879e500e35b5132d881d5c58dbce8cd92a858ece8e6e115 not found: ID does not exist" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.870199 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.006449051 podStartE2EDuration="5.87018607s" podCreationTimestamp="2026-02-18 11:57:54 +0000 UTC" firstStartedPulling="2026-02-18 11:57:55.598328735 +0000 UTC m=+1277.326032815" lastFinishedPulling="2026-02-18 11:57:59.462065754 +0000 UTC m=+1281.189769834" observedRunningTime="2026-02-18 11:57:59.862795613 +0000 UTC m=+1281.590499693" watchObservedRunningTime="2026-02-18 11:57:59.87018607 +0000 UTC m=+1281.597890150" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.892402 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.906957 4922 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.918808 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:57:59 crc kubenswrapper[4922]: E0218 11:57:59.919258 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb6fddf-10f2-476b-822f-130f6fa12007" containerName="watcher-decision-engine" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.919272 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb6fddf-10f2-476b-822f-130f6fa12007" containerName="watcher-decision-engine" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.919474 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb6fddf-10f2-476b-822f-130f6fa12007" containerName="watcher-decision-engine" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.921072 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.935833 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:57:59 crc kubenswrapper[4922]: I0218 11:57:59.936680 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.058391 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stxjq\" (UniqueName: \"kubernetes.io/projected/3df41ae7-b237-49e2-902c-f33e693f5db9-kube-api-access-stxjq\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.058452 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.058563 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.058612 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df41ae7-b237-49e2-902c-f33e693f5db9-logs\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.058673 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.160692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3df41ae7-b237-49e2-902c-f33e693f5db9-logs\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.160998 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.161285 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df41ae7-b237-49e2-902c-f33e693f5db9-logs\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.161721 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stxjq\" (UniqueName: \"kubernetes.io/projected/3df41ae7-b237-49e2-902c-f33e693f5db9-kube-api-access-stxjq\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.161754 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.161811 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.165073 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.166128 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.168159 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df41ae7-b237-49e2-902c-f33e693f5db9-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.181804 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stxjq\" (UniqueName: \"kubernetes.io/projected/3df41ae7-b237-49e2-902c-f33e693f5db9-kube-api-access-stxjq\") pod \"watcher-decision-engine-0\" (UID: 
\"3df41ae7-b237-49e2-902c-f33e693f5db9\") " pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.247466 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.742865 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.846569 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3df41ae7-b237-49e2-902c-f33e693f5db9","Type":"ContainerStarted","Data":"d6a4a9c5612986b7024445846c98908756d70164a4c3727278455ddfd9a50fed"} Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.850673 4922 generic.go:334] "Generic (PLEG): container finished" podID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerID="019a6342d78a17e9007569b8df388ae4cd83074ceff02bfe55eb5c3a71054609" exitCode=2 Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.850703 4922 generic.go:334] "Generic (PLEG): container finished" podID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerID="d7cb0d481469976e2f0d27c45a95752ec61c6325990c025f51f551cbf42c3d51" exitCode=0 Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.850709 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerDied","Data":"019a6342d78a17e9007569b8df388ae4cd83074ceff02bfe55eb5c3a71054609"} Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.850752 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerDied","Data":"d7cb0d481469976e2f0d27c45a95752ec61c6325990c025f51f551cbf42c3d51"} Feb 18 11:58:00 crc kubenswrapper[4922]: I0218 11:58:00.986152 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb6fddf-10f2-476b-822f-130f6fa12007" path="/var/lib/kubelet/pods/bdb6fddf-10f2-476b-822f-130f6fa12007/volumes" Feb 18 11:58:01 crc kubenswrapper[4922]: I0218 11:58:01.862288 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3df41ae7-b237-49e2-902c-f33e693f5db9","Type":"ContainerStarted","Data":"87903c73d53186adbab4fd2f241af4eb9eed10c5c887c5c9f491c04133c21f0b"} Feb 18 11:58:01 crc kubenswrapper[4922]: I0218 11:58:01.884393 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.884354529 podStartE2EDuration="2.884354529s" podCreationTimestamp="2026-02-18 11:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:01.879586759 +0000 UTC m=+1283.607290839" watchObservedRunningTime="2026-02-18 11:58:01.884354529 +0000 UTC m=+1283.612058609" Feb 18 11:58:02 crc kubenswrapper[4922]: I0218 11:58:02.873965 4922 generic.go:334] "Generic (PLEG): container finished" podID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerID="c50b1f228a346e798dee756ffe7172e5752cdc2bb9dd07e1f2ab8318d4e33a78" exitCode=0 Feb 18 11:58:02 crc kubenswrapper[4922]: I0218 11:58:02.874050 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerDied","Data":"c50b1f228a346e798dee756ffe7172e5752cdc2bb9dd07e1f2ab8318d4e33a78"} Feb 18 11:58:03 
crc kubenswrapper[4922]: I0218 11:58:03.409662 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.524142 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjthv\" (UniqueName: \"kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv\") pod \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.524227 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle\") pod \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.524287 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs\") pod \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.524407 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config\") pod \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.524515 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config\") pod \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\" (UID: \"6b1ea57e-dcf2-4e47-8650-af483b18ea8f\") " Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.535814 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv" (OuterVolumeSpecName: "kube-api-access-vjthv") pod "6b1ea57e-dcf2-4e47-8650-af483b18ea8f" (UID: "6b1ea57e-dcf2-4e47-8650-af483b18ea8f"). InnerVolumeSpecName "kube-api-access-vjthv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.536177 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6b1ea57e-dcf2-4e47-8650-af483b18ea8f" (UID: "6b1ea57e-dcf2-4e47-8650-af483b18ea8f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.585352 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b1ea57e-dcf2-4e47-8650-af483b18ea8f" (UID: "6b1ea57e-dcf2-4e47-8650-af483b18ea8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.600476 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config" (OuterVolumeSpecName: "config") pod "6b1ea57e-dcf2-4e47-8650-af483b18ea8f" (UID: "6b1ea57e-dcf2-4e47-8650-af483b18ea8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.627452 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.627493 4922 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.627507 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjthv\" (UniqueName: \"kubernetes.io/projected/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-kube-api-access-vjthv\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.627519 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.635524 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6b1ea57e-dcf2-4e47-8650-af483b18ea8f" (UID: "6b1ea57e-dcf2-4e47-8650-af483b18ea8f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.729642 4922 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1ea57e-dcf2-4e47-8650-af483b18ea8f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.885198 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-979b8465b-gmztk" event={"ID":"6b1ea57e-dcf2-4e47-8650-af483b18ea8f","Type":"ContainerDied","Data":"d41035e2a8cf606b335f710a1d0f91690ca9a161e3cc25d246206a6e3ab38420"} Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.885261 4922 scope.go:117] "RemoveContainer" containerID="dd50276aac03478ea25adcde79bcd1337f3694428dc64ce90dcf6fe92e773c3c" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.885260 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-979b8465b-gmztk" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.918565 4922 scope.go:117] "RemoveContainer" containerID="c50b1f228a346e798dee756ffe7172e5752cdc2bb9dd07e1f2ab8318d4e33a78" Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.928535 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:58:03 crc kubenswrapper[4922]: I0218 11:58:03.936675 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-979b8465b-gmztk"] Feb 18 11:58:04 crc kubenswrapper[4922]: I0218 11:58:04.988623 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" path="/var/lib/kubelet/pods/6b1ea57e-dcf2-4e47-8650-af483b18ea8f/volumes" Feb 18 11:58:05 crc kubenswrapper[4922]: I0218 11:58:05.907230 4922 generic.go:334] "Generic (PLEG): container finished" podID="ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" containerID="38158e702c80c7a31868549296123efd8bf7bfe465aae6ab472768cf855c3a38" exitCode=0 Feb 18 11:58:05 crc kubenswrapper[4922]: I0218 11:58:05.907325 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" event={"ID":"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5","Type":"ContainerDied","Data":"38158e702c80c7a31868549296123efd8bf7bfe465aae6ab472768cf855c3a38"} Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.256285 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.291719 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data\") pod \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.291856 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts\") pod \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.291892 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdr6h\" (UniqueName: \"kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h\") pod \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.291933 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle\") pod \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\" (UID: \"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5\") " Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.298125 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts" (OuterVolumeSpecName: "scripts") pod "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" (UID: "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.298467 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h" (OuterVolumeSpecName: "kube-api-access-hdr6h") pod "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" (UID: "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5"). InnerVolumeSpecName "kube-api-access-hdr6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.326535 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" (UID: "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.326584 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data" (OuterVolumeSpecName: "config-data") pod "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" (UID: "ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.393911 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.393971 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.393981 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdr6h\" (UniqueName: \"kubernetes.io/projected/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-kube-api-access-hdr6h\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.393993 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.925188 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" event={"ID":"ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5","Type":"ContainerDied","Data":"19238151c6895845aeb1fcd79101b75a8cace32e850eb034480705b42b7e45a0"} Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.925238 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19238151c6895845aeb1fcd79101b75a8cace32e850eb034480705b42b7e45a0" Feb 18 11:58:07 crc kubenswrapper[4922]: I0218 11:58:07.925238 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ws7m8" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.028836 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 11:58:08 crc kubenswrapper[4922]: E0218 11:58:08.031271 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-httpd" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.031518 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-httpd" Feb 18 11:58:08 crc kubenswrapper[4922]: E0218 11:58:08.031673 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-api" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.031968 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-api" Feb 18 11:58:08 crc kubenswrapper[4922]: E0218 11:58:08.032100 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" containerName="nova-cell0-conductor-db-sync" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.032172 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" containerName="nova-cell0-conductor-db-sync" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.032512 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-api" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.032609 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" containerName="nova-cell0-conductor-db-sync" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.032793 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1ea57e-dcf2-4e47-8650-af483b18ea8f" containerName="neutron-httpd" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.033746 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.035745 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.037578 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m8flj" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.040468 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.107128 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.107191 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9nd\" (UniqueName: \"kubernetes.io/projected/4a95479a-1834-4e95-b18a-c0bcef05f7ed-kube-api-access-vp9nd\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.107443 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.209281 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.209669 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9nd\" (UniqueName: \"kubernetes.io/projected/4a95479a-1834-4e95-b18a-c0bcef05f7ed-kube-api-access-vp9nd\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.209859 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.213249 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.219193 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95479a-1834-4e95-b18a-c0bcef05f7ed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.229613 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9nd\" (UniqueName: \"kubernetes.io/projected/4a95479a-1834-4e95-b18a-c0bcef05f7ed-kube-api-access-vp9nd\") pod \"nova-cell0-conductor-0\" (UID: \"4a95479a-1834-4e95-b18a-c0bcef05f7ed\") " pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.354079 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.796102 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.944728 4922 generic.go:334] "Generic (PLEG): container finished" podID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerID="6c511819319a5fc193d0f75ed63d9083c79d2c12407691912b456da48212acab" exitCode=0 Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.944887 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerDied","Data":"6c511819319a5fc193d0f75ed63d9083c79d2c12407691912b456da48212acab"} Feb 18 11:58:08 crc kubenswrapper[4922]: I0218 11:58:08.962112 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4a95479a-1834-4e95-b18a-c0bcef05f7ed","Type":"ContainerStarted","Data":"f42d55be5c9e0a17431f7ff37043f8713b027dcd2e1d8947123ee2cafd90afd7"} Feb 18 11:58:09 crc kubenswrapper[4922]: I0218 11:58:09.974302 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4a95479a-1834-4e95-b18a-c0bcef05f7ed","Type":"ContainerStarted","Data":"02ac8807bfca9d0dfa78345a198171fd9c85f24ad621752cf9fc96c02e8473c1"} Feb 18 11:58:09 crc kubenswrapper[4922]: I0218 11:58:09.974606 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:09 crc kubenswrapper[4922]: I0218 11:58:09.999877 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.9998415349999998 podStartE2EDuration="1.999841535s" podCreationTimestamp="2026-02-18 11:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:09.994388978 +0000 UTC m=+1291.722093078" watchObservedRunningTime="2026-02-18 11:58:09.999841535 +0000 UTC m=+1291.727545615" Feb 18 11:58:10 crc kubenswrapper[4922]: I0218 11:58:10.248873 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 11:58:10 crc kubenswrapper[4922]: I0218 11:58:10.274295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 11:58:10 crc kubenswrapper[4922]: I0218 11:58:10.984711 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 11:58:11 crc kubenswrapper[4922]: I0218 11:58:11.008860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.380309 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.803128 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gts9l"] Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.804706 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.806824 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.808208 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.814587 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gts9l"] Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.931924 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.932026 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2rwd\" (UniqueName: \"kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.932108 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.932249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.955975 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.958209 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.964823 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 11:58:13 crc kubenswrapper[4922]: I0218 11:58:13.995870 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035617 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035694 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035759 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2rwd\" (UniqueName: \"kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035796 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035833 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035874 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.035913 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqjqm\" (UniqueName: \"kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.054959 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] 
Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.056470 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.058195 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.060905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.061566 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.076187 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.078256 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.105501 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2rwd\" (UniqueName: \"kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd\") pod \"nova-cell0-cell-mapping-gts9l\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.124220 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.137792 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.137856 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqjqm\" (UniqueName: \"kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.137911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.137942 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5hb\" (UniqueName: \"kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.137999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.138022 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.138061 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.139202 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.144292 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.153437 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.174557 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqjqm\" (UniqueName: \"kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm\") pod \"nova-api-0\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.184234 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.186050 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.194159 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.213200 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245118 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vrt\" (UniqueName: \"kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245164 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245246 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245267 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245304 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5hb\" (UniqueName: \"kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245339 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.245388 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.249984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.268586 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5hb\" (UniqueName: \"kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.269996 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.281518 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.282671 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.282914 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.286704 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.309342 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.340790 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.343541 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347563 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347644 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvg5\" (UniqueName: \"kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347720 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66vrt\" (UniqueName: \"kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347795 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347850 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347876 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.347910 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.350892 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.353112 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.361604 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.365175 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.379178 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.380209 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66vrt\" (UniqueName: \"kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt\") pod \"nova-metadata-0\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.408589 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450147 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tzw\" (UniqueName: \"kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450206 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450251 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450289 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvg5\" (UniqueName: \"kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450372 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450432 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450452 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450494 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.450555 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.455092 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.457832 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.469965 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvg5\" (UniqueName: \"kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5\") pod \"nova-scheduler-0\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554082 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tzw\" (UniqueName: \"kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554178 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554248 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554428 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.554470 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.556032 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.556032 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.556804 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.557863 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.557916 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.588122 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tzw\" (UniqueName: \"kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw\") pod \"dnsmasq-dns-bccf8f775-pc7hm\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.727875 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.745762 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:14 crc kubenswrapper[4922]: I0218 11:58:14.811517 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gts9l"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.041202 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.081842 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gts9l" event={"ID":"95fc0adb-b8ae-4fd6-88eb-3b6357173103","Type":"ContainerStarted","Data":"3dd9e8b282e0169c21e93008600de3cbb1dc520cc5c66cbb90c2934ce89d2770"} Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.085476 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zqpbh"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.093435 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.101739 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zqpbh"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.104395 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.107960 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.108787 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.120266 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.169926 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.170157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.170462 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.170498 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zsk\" (UniqueName: \"kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " 
pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.278495 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.278559 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zsk\" (UniqueName: \"kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.278730 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.279052 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.285563 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.286960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.292050 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.300875 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zsk\" (UniqueName: \"kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk\") pod \"nova-cell1-conductor-db-sync-zqpbh\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.387162 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.491560 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:15 crc kubenswrapper[4922]: I0218 11:58:15.531467 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:58:15 crc kubenswrapper[4922]: W0218 11:58:15.542106 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fe07f11_0f10_4aa7_ab94_51d42b7a6367.slice/crio-0199a345c591cc6803424fbda06bc8be81c6f2649bc4692acfd3ea6b8d197abc WatchSource:0}: Error finding container 0199a345c591cc6803424fbda06bc8be81c6f2649bc4692acfd3ea6b8d197abc: Status 404 returned error can't find the container with id 0199a345c591cc6803424fbda06bc8be81c6f2649bc4692acfd3ea6b8d197abc Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.022228 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zqpbh"] Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.106895 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" event={"ID":"aacb8ffe-ff30-4292-b253-1e12d07f499b","Type":"ContainerStarted","Data":"1ed9fd2d7f07e8fb94873887c1633bde383c19238f884a2aec0494c3844e1788"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.108253 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerStarted","Data":"6f453239385971f9189e759542805bb61c809c6dd720600811af9a9f4a7ac835"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.111408 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ce92cda-4459-4a73-8fc2-84bbb56eccce","Type":"ContainerStarted","Data":"92d7d00c3ed4a00a1aeac19f0bbbe19ab3df77be12277cadf25b39bf998465e7"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.114220 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gts9l" event={"ID":"95fc0adb-b8ae-4fd6-88eb-3b6357173103","Type":"ContainerStarted","Data":"3dd5bc6bbbea448a7d9229f94c3182f66fafad86532a2426f6a51eaa5b649205"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.135342 4922 generic.go:334] "Generic (PLEG): container finished" podID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerID="8aa54b45b2152668d79b56f9c12b91df1925011ab0dbd7a2601a1ffa9f2d27a9" exitCode=0 Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.135458 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" event={"ID":"5fe07f11-0f10-4aa7-ab94-51d42b7a6367","Type":"ContainerDied","Data":"8aa54b45b2152668d79b56f9c12b91df1925011ab0dbd7a2601a1ffa9f2d27a9"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.135484 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" event={"ID":"5fe07f11-0f10-4aa7-ab94-51d42b7a6367","Type":"ContainerStarted","Data":"0199a345c591cc6803424fbda06bc8be81c6f2649bc4692acfd3ea6b8d197abc"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.144703 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e0d2e17-4045-420d-817b-41a1fc66c425","Type":"ContainerStarted","Data":"5c4ed2cdb2b752aa88aa0c848f0558afd5579a29c6af7de94b0afbe9de2ec4ec"} Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.148672 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-gts9l" podStartSLOduration=3.148649972 podStartE2EDuration="3.148649972s" podCreationTimestamp="2026-02-18 11:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:16.138924446 +0000 UTC m=+1297.866628546" watchObservedRunningTime="2026-02-18 11:58:16.148649972 +0000 UTC m=+1297.876354052" Feb 18 11:58:16 crc kubenswrapper[4922]: I0218 11:58:16.164578 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerStarted","Data":"0bc208ef650f9f609657312f6d8f2198c5a12a9f657dff0516610817ba9c8516"} Feb 18 11:58:17 crc kubenswrapper[4922]: I0218 11:58:17.181039 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" event={"ID":"aacb8ffe-ff30-4292-b253-1e12d07f499b","Type":"ContainerStarted","Data":"ee60fdb1d51bfbe6bd8d0e50a4e4f1a8691171bcc2ba0c807ba59a9d88e2dc0d"} Feb 18 11:58:17 crc kubenswrapper[4922]: I0218 11:58:17.195525 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" podStartSLOduration=3.195506452 podStartE2EDuration="3.195506452s" podCreationTimestamp="2026-02-18 11:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:17.194809345 +0000 UTC m=+1298.922513425" watchObservedRunningTime="2026-02-18 11:58:17.195506452 +0000 UTC m=+1298.923210532" Feb 18 11:58:17 crc kubenswrapper[4922]: I0218 11:58:17.684564 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:17 crc kubenswrapper[4922]: I0218 11:58:17.699065 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.234338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerStarted","Data":"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.235047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerStarted","Data":"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.234512 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-metadata" containerID="cri-o://b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" gracePeriod=30 Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.234464 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-log" containerID="cri-o://353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" gracePeriod=30 Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.237571 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3ce92cda-4459-4a73-8fc2-84bbb56eccce","Type":"ContainerStarted","Data":"c14ea122843eb522430115cdefefb0c3b7d6b58a0e5c203b5fb2fca56b1e56ba"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.238683 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c14ea122843eb522430115cdefefb0c3b7d6b58a0e5c203b5fb2fca56b1e56ba" gracePeriod=30 Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.244666 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" event={"ID":"5fe07f11-0f10-4aa7-ab94-51d42b7a6367","Type":"ContainerStarted","Data":"42671006b982c23fe661235bf5a74dcd5e79ee9d03166f4a0fe4596b08069112"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.249328 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e0d2e17-4045-420d-817b-41a1fc66c425","Type":"ContainerStarted","Data":"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.252441 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerStarted","Data":"783aeec10a7933650343fa4b55375b721d65807531d8e7dbf7c2823b6b487dd6"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.252493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerStarted","Data":"deb34b70b7dbc61de6d4d92ee5f97c64b68483ab7a9de52c357c2663ff82c337"} Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.264604 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.384826183 podStartE2EDuration="7.264584092s" podCreationTimestamp="2026-02-18 11:58:14 +0000 UTC" firstStartedPulling="2026-02-18 11:58:15.090413335 +0000 UTC m=+1296.818117405" lastFinishedPulling="2026-02-18 11:58:19.970171234 +0000 UTC m=+1301.697875314" observedRunningTime="2026-02-18 11:58:21.255374379 +0000 UTC m=+1302.983078479" watchObservedRunningTime="2026-02-18 11:58:21.264584092 +0000 UTC m=+1302.992288172" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.288687 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.361677808 podStartE2EDuration="8.288664131s" podCreationTimestamp="2026-02-18 11:58:13 +0000 UTC" firstStartedPulling="2026-02-18 11:58:15.0415503 +0000 UTC m=+1296.769254380" lastFinishedPulling="2026-02-18 11:58:19.968536623 +0000 UTC m=+1301.696240703" observedRunningTime="2026-02-18 11:58:21.279109589 +0000 UTC m=+1303.006813679" watchObservedRunningTime="2026-02-18 11:58:21.288664131 +0000 UTC m=+1303.016368211" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.304595 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.729398631 podStartE2EDuration="7.304578043s" podCreationTimestamp="2026-02-18 11:58:14 +0000 UTC" firstStartedPulling="2026-02-18 11:58:15.39370412 +0000 UTC m=+1297.121408200" lastFinishedPulling="2026-02-18 11:58:19.968883522 +0000 UTC m=+1301.696587612" observedRunningTime="2026-02-18 11:58:21.293699348 +0000 UTC m=+1303.021403438" watchObservedRunningTime="2026-02-18 11:58:21.304578043 +0000 UTC 
m=+1303.032282123" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.320062 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.486165194 podStartE2EDuration="7.320041644s" podCreationTimestamp="2026-02-18 11:58:14 +0000 UTC" firstStartedPulling="2026-02-18 11:58:15.134734015 +0000 UTC m=+1296.862438095" lastFinishedPulling="2026-02-18 11:58:19.968610465 +0000 UTC m=+1301.696314545" observedRunningTime="2026-02-18 11:58:21.311427066 +0000 UTC m=+1303.039131146" watchObservedRunningTime="2026-02-18 11:58:21.320041644 +0000 UTC m=+1303.047745724" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.363525 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" podStartSLOduration=7.363503032 podStartE2EDuration="7.363503032s" podCreationTimestamp="2026-02-18 11:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:21.345964749 +0000 UTC m=+1303.073668829" watchObservedRunningTime="2026-02-18 11:58:21.363503032 +0000 UTC m=+1303.091207122" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.842766 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.935625 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle\") pod \"eea42093-2f99-433e-8cde-fe075d89d91f\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.935695 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vrt\" (UniqueName: \"kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt\") pod \"eea42093-2f99-433e-8cde-fe075d89d91f\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.935776 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data\") pod \"eea42093-2f99-433e-8cde-fe075d89d91f\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.935823 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs\") pod \"eea42093-2f99-433e-8cde-fe075d89d91f\" (UID: \"eea42093-2f99-433e-8cde-fe075d89d91f\") " Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.936803 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs" (OuterVolumeSpecName: "logs") pod "eea42093-2f99-433e-8cde-fe075d89d91f" (UID: "eea42093-2f99-433e-8cde-fe075d89d91f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.941736 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt" (OuterVolumeSpecName: "kube-api-access-66vrt") pod "eea42093-2f99-433e-8cde-fe075d89d91f" (UID: "eea42093-2f99-433e-8cde-fe075d89d91f"). InnerVolumeSpecName "kube-api-access-66vrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.973147 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data" (OuterVolumeSpecName: "config-data") pod "eea42093-2f99-433e-8cde-fe075d89d91f" (UID: "eea42093-2f99-433e-8cde-fe075d89d91f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:21 crc kubenswrapper[4922]: I0218 11:58:21.976969 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea42093-2f99-433e-8cde-fe075d89d91f" (UID: "eea42093-2f99-433e-8cde-fe075d89d91f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.038009 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.038048 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66vrt\" (UniqueName: \"kubernetes.io/projected/eea42093-2f99-433e-8cde-fe075d89d91f-kube-api-access-66vrt\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.038065 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea42093-2f99-433e-8cde-fe075d89d91f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.038079 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eea42093-2f99-433e-8cde-fe075d89d91f-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.264963 4922 generic.go:334] "Generic (PLEG): container finished" podID="eea42093-2f99-433e-8cde-fe075d89d91f" containerID="b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" exitCode=0 Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265294 4922 generic.go:334] "Generic (PLEG): container finished" podID="eea42093-2f99-433e-8cde-fe075d89d91f" containerID="353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" exitCode=143 Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265211 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerDied","Data":"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25"} Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265711 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerDied","Data":"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d"} Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265733 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eea42093-2f99-433e-8cde-fe075d89d91f","Type":"ContainerDied","Data":"6f453239385971f9189e759542805bb61c809c6dd720600811af9a9f4a7ac835"} Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265752 4922 scope.go:117] "RemoveContainer" containerID="b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.265176 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.267412 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.307887 4922 scope.go:117] "RemoveContainer" containerID="353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.333517 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.333841 4922 scope.go:117] "RemoveContainer" containerID="b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" Feb 18 11:58:22 crc kubenswrapper[4922]: E0218 11:58:22.334583 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25\": container with ID starting with b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25 not found: ID does not exist" containerID="b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.334620 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25"} err="failed to get container status \"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25\": rpc error: code = NotFound desc = could not find container \"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25\": container with ID starting with b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25 not found: ID does not exist" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.334646 4922 scope.go:117] "RemoveContainer" containerID="353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" Feb 18 11:58:22 crc kubenswrapper[4922]: E0218 11:58:22.335046 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d\": container with ID starting with 353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d not found: ID does not exist" containerID="353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.335063 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d"} err="failed to get container status \"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d\": rpc error: code = NotFound 
desc = could not find container \"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d\": container with ID starting with 353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d not found: ID does not exist" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.335084 4922 scope.go:117] "RemoveContainer" containerID="b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.335294 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25"} err="failed to get container status \"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25\": rpc error: code = NotFound desc = could not find container \"b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25\": container with ID starting with b8da702e013fe942a755036a03637ef1c9322a5d134a3e1f2777cc744e2c7d25 not found: ID does not exist" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.335313 4922 scope.go:117] "RemoveContainer" containerID="353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.335543 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d"} err="failed to get container status \"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d\": rpc error: code = NotFound desc = could not find container \"353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d\": container with ID starting with 353ea64b8a388da79d11ac959cbc398ef7009e7be3e2209ab8d4d8c02038874d not found: ID does not exist" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.338753 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.348004 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:22 crc kubenswrapper[4922]: E0218 11:58:22.352014 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-log" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.352039 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-log" Feb 18 11:58:22 crc kubenswrapper[4922]: E0218 11:58:22.352058 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-metadata" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.352064 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-metadata" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.352287 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-metadata" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.352311 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" containerName="nova-metadata-log" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.353328 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.356172 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.356391 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.392388 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.444077 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.444227 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.444277 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.444315 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx867\" (UniqueName: \"kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.444350 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.545899 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.545960 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.545993 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx867\" (UniqueName: \"kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " 
pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.546023 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.546090 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.546588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.559604 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.560198 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.561081 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.562932 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx867\" (UniqueName: \"kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867\") pod \"nova-metadata-0\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.674738 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:22 crc kubenswrapper[4922]: I0218 11:58:22.985173 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea42093-2f99-433e-8cde-fe075d89d91f" path="/var/lib/kubelet/pods/eea42093-2f99-433e-8cde-fe075d89d91f/volumes" Feb 18 11:58:23 crc kubenswrapper[4922]: I0218 11:58:23.131263 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:23 crc kubenswrapper[4922]: I0218 11:58:23.284371 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerStarted","Data":"356cb0c5496ed1db46c504b66fd0a7df5bfe8b80d7e99efd7e71f14288b55ff9"} Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.284497 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.284990 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.300159 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerStarted","Data":"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a"} Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.300239 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerStarted","Data":"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e"} Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.305879 4922 generic.go:334] "Generic (PLEG): container finished" podID="95fc0adb-b8ae-4fd6-88eb-3b6357173103" containerID="3dd5bc6bbbea448a7d9229f94c3182f66fafad86532a2426f6a51eaa5b649205" exitCode=0 Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.305993 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gts9l" event={"ID":"95fc0adb-b8ae-4fd6-88eb-3b6357173103","Type":"ContainerDied","Data":"3dd5bc6bbbea448a7d9229f94c3182f66fafad86532a2426f6a51eaa5b649205"} Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.309730 4922 generic.go:334] "Generic (PLEG): container finished" podID="aacb8ffe-ff30-4292-b253-1e12d07f499b" containerID="ee60fdb1d51bfbe6bd8d0e50a4e4f1a8691171bcc2ba0c807ba59a9d88e2dc0d" exitCode=0 Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.309797 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" event={"ID":"aacb8ffe-ff30-4292-b253-1e12d07f499b","Type":"ContainerDied","Data":"ee60fdb1d51bfbe6bd8d0e50a4e4f1a8691171bcc2ba0c807ba59a9d88e2dc0d"} Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.322111 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.322082643 podStartE2EDuration="2.322082643s" podCreationTimestamp="2026-02-18 11:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:24.321657922 +0000 UTC m=+1306.049362042" watchObservedRunningTime="2026-02-18 11:58:24.322082643 +0000 UTC m=+1306.049786793" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.380595 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.728523 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.728595 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 11:58:24 crc kubenswrapper[4922]: I0218 11:58:24.766943 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.143508 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.350647 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.366509 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.366515 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.849230 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.855011 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.912970 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts\") pod \"aacb8ffe-ff30-4292-b253-1e12d07f499b\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.913139 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data\") pod \"aacb8ffe-ff30-4292-b253-1e12d07f499b\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.913458 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle\") pod \"aacb8ffe-ff30-4292-b253-1e12d07f499b\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.913585 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8zsk\" (UniqueName: \"kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk\") pod \"aacb8ffe-ff30-4292-b253-1e12d07f499b\" (UID: \"aacb8ffe-ff30-4292-b253-1e12d07f499b\") " Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.941942 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts" (OuterVolumeSpecName: "scripts") pod "aacb8ffe-ff30-4292-b253-1e12d07f499b" (UID: "aacb8ffe-ff30-4292-b253-1e12d07f499b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.950617 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk" (OuterVolumeSpecName: "kube-api-access-q8zsk") pod "aacb8ffe-ff30-4292-b253-1e12d07f499b" (UID: "aacb8ffe-ff30-4292-b253-1e12d07f499b"). InnerVolumeSpecName "kube-api-access-q8zsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:25 crc kubenswrapper[4922]: I0218 11:58:25.970137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aacb8ffe-ff30-4292-b253-1e12d07f499b" (UID: "aacb8ffe-ff30-4292-b253-1e12d07f499b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.006983 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data" (OuterVolumeSpecName: "config-data") pod "aacb8ffe-ff30-4292-b253-1e12d07f499b" (UID: "aacb8ffe-ff30-4292-b253-1e12d07f499b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.021681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts\") pod \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.021810 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle\") pod \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.021839 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data\") pod \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.021869 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2rwd\" (UniqueName: \"kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd\") pod \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\" (UID: \"95fc0adb-b8ae-4fd6-88eb-3b6357173103\") " Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.022449 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.022471 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.022483 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb8ffe-ff30-4292-b253-1e12d07f499b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.022497 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8zsk\" (UniqueName: \"kubernetes.io/projected/aacb8ffe-ff30-4292-b253-1e12d07f499b-kube-api-access-q8zsk\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.027339 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts" (OuterVolumeSpecName: "scripts") pod "95fc0adb-b8ae-4fd6-88eb-3b6357173103" (UID: "95fc0adb-b8ae-4fd6-88eb-3b6357173103"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.028959 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd" (OuterVolumeSpecName: "kube-api-access-r2rwd") pod "95fc0adb-b8ae-4fd6-88eb-3b6357173103" (UID: "95fc0adb-b8ae-4fd6-88eb-3b6357173103"). InnerVolumeSpecName "kube-api-access-r2rwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.053438 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95fc0adb-b8ae-4fd6-88eb-3b6357173103" (UID: "95fc0adb-b8ae-4fd6-88eb-3b6357173103"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.053856 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data" (OuterVolumeSpecName: "config-data") pod "95fc0adb-b8ae-4fd6-88eb-3b6357173103" (UID: "95fc0adb-b8ae-4fd6-88eb-3b6357173103"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.124568 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2rwd\" (UniqueName: \"kubernetes.io/projected/95fc0adb-b8ae-4fd6-88eb-3b6357173103-kube-api-access-r2rwd\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.124614 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.124630 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.124643 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fc0adb-b8ae-4fd6-88eb-3b6357173103-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.327097 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gts9l" event={"ID":"95fc0adb-b8ae-4fd6-88eb-3b6357173103","Type":"ContainerDied","Data":"3dd9e8b282e0169c21e93008600de3cbb1dc520cc5c66cbb90c2934ce89d2770"} Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.327143 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd9e8b282e0169c21e93008600de3cbb1dc520cc5c66cbb90c2934ce89d2770" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.327219 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gts9l" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.329967 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.331428 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zqpbh" event={"ID":"aacb8ffe-ff30-4292-b253-1e12d07f499b","Type":"ContainerDied","Data":"1ed9fd2d7f07e8fb94873887c1633bde383c19238f884a2aec0494c3844e1788"} Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.331479 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed9fd2d7f07e8fb94873887c1633bde383c19238f884a2aec0494c3844e1788" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.441458 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 11:58:26 crc kubenswrapper[4922]: E0218 11:58:26.443210 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb8ffe-ff30-4292-b253-1e12d07f499b" containerName="nova-cell1-conductor-db-sync" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.443303 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb8ffe-ff30-4292-b253-1e12d07f499b" containerName="nova-cell1-conductor-db-sync" Feb 18 11:58:26 crc kubenswrapper[4922]: E0218 11:58:26.443406 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fc0adb-b8ae-4fd6-88eb-3b6357173103" containerName="nova-manage" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.443462 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fc0adb-b8ae-4fd6-88eb-3b6357173103" containerName="nova-manage" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.443689 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb8ffe-ff30-4292-b253-1e12d07f499b" containerName="nova-cell1-conductor-db-sync" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.443768 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fc0adb-b8ae-4fd6-88eb-3b6357173103" containerName="nova-manage" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.444460 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.452918 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.471328 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.533552 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbd2\" (UniqueName: \"kubernetes.io/projected/31ef9a9b-fedd-4afd-8582-19ef097c98a2-kube-api-access-9rbd2\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.533901 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.534168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.549931 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.550180 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-log" containerID="cri-o://deb34b70b7dbc61de6d4d92ee5f97c64b68483ab7a9de52c357c2663ff82c337" gracePeriod=30 Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.550616 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-api" containerID="cri-o://783aeec10a7933650343fa4b55375b721d65807531d8e7dbf7c2823b6b487dd6" gracePeriod=30 Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.563098 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.577656 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.577941 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-log" containerID="cri-o://c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" gracePeriod=30 Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.578005 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-metadata" containerID="cri-o://9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" gracePeriod=30 Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.636534 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.636859 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbd2\" (UniqueName: \"kubernetes.io/projected/31ef9a9b-fedd-4afd-8582-19ef097c98a2-kube-api-access-9rbd2\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.636968 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.641020 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.641270 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31ef9a9b-fedd-4afd-8582-19ef097c98a2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.657408 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbd2\" (UniqueName: \"kubernetes.io/projected/31ef9a9b-fedd-4afd-8582-19ef097c98a2-kube-api-access-9rbd2\") pod \"nova-cell1-conductor-0\" (UID: \"31ef9a9b-fedd-4afd-8582-19ef097c98a2\") " pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:26 crc kubenswrapper[4922]: I0218 11:58:26.761796 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.314194 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.345300 4922 generic.go:334] "Generic (PLEG): container finished" podID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerID="deb34b70b7dbc61de6d4d92ee5f97c64b68483ab7a9de52c357c2663ff82c337" exitCode=143 Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.345403 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerDied","Data":"deb34b70b7dbc61de6d4d92ee5f97c64b68483ab7a9de52c357c2663ff82c337"} Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348415 4922 generic.go:334] "Generic (PLEG): container finished" podID="cda998c8-9655-49e8-ad74-689371f71535" containerID="9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" exitCode=0 Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348477 4922 generic.go:334] "Generic (PLEG): container finished" podID="cda998c8-9655-49e8-ad74-689371f71535" containerID="c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" exitCode=143 Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348558 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerDied","Data":"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a"} Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348595 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerDied","Data":"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e"} Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348613 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cda998c8-9655-49e8-ad74-689371f71535","Type":"ContainerDied","Data":"356cb0c5496ed1db46c504b66fd0a7df5bfe8b80d7e99efd7e71f14288b55ff9"} Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348629 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348648 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" containerName="nova-scheduler-scheduler" containerID="cri-o://c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" gracePeriod=30 Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.348637 4922 scope.go:117] "RemoveContainer" containerID="9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.363619 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.387571 4922 scope.go:117] "RemoveContainer" containerID="c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.423417 4922 scope.go:117] "RemoveContainer" containerID="9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" Feb 18 11:58:27 crc kubenswrapper[4922]: E0218 11:58:27.423960 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a\": container with ID starting with 9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a not found: ID does not exist" containerID="9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.423993 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a"} err="failed to get container status \"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a\": rpc error: code = NotFound desc = could not find container \"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a\": container with ID starting with 9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a not found: ID does not exist" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424016 4922 scope.go:117] "RemoveContainer" containerID="c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" Feb 18 11:58:27 crc kubenswrapper[4922]: E0218 11:58:27.424305 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e\": container with ID starting with c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e not found: ID does not exist" containerID="c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424325 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e"} err="failed to get container status \"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e\": rpc error: code = NotFound desc = could not find container \"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e\": container with ID starting with c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e not found: ID does not exist" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424338 4922 scope.go:117] "RemoveContainer" 
containerID="9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424588 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a"} err="failed to get container status \"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a\": rpc error: code = NotFound desc = could not find container \"9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a\": container with ID starting with 9dce9da90c9fbb3d596e5a3d01c25c49fc5f8665b585b44d2234a48680cd799a not found: ID does not exist" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424618 4922 scope.go:117] "RemoveContainer" containerID="c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.424947 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e"} err="failed to get container status \"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e\": rpc error: code = NotFound desc = could not find container \"c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e\": container with ID starting with c33e043e4f1e87b96e6f147a1fcb7afe16fcf9841b5e0ebbe368f8d27947cc5e not found: ID does not exist" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.452904 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs\") pod \"cda998c8-9655-49e8-ad74-689371f71535\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453077 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs\") pod \"cda998c8-9655-49e8-ad74-689371f71535\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453135 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx867\" (UniqueName: \"kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867\") pod \"cda998c8-9655-49e8-ad74-689371f71535\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453246 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle\") pod \"cda998c8-9655-49e8-ad74-689371f71535\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453321 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data\") pod \"cda998c8-9655-49e8-ad74-689371f71535\" (UID: \"cda998c8-9655-49e8-ad74-689371f71535\") " Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453562 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs" (OuterVolumeSpecName: "logs") pod "cda998c8-9655-49e8-ad74-689371f71535" (UID: "cda998c8-9655-49e8-ad74-689371f71535"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.453853 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda998c8-9655-49e8-ad74-689371f71535-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.460765 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867" (OuterVolumeSpecName: "kube-api-access-mx867") pod "cda998c8-9655-49e8-ad74-689371f71535" (UID: "cda998c8-9655-49e8-ad74-689371f71535"). InnerVolumeSpecName "kube-api-access-mx867". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.489567 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data" (OuterVolumeSpecName: "config-data") pod "cda998c8-9655-49e8-ad74-689371f71535" (UID: "cda998c8-9655-49e8-ad74-689371f71535"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.493349 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cda998c8-9655-49e8-ad74-689371f71535" (UID: "cda998c8-9655-49e8-ad74-689371f71535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.523517 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cda998c8-9655-49e8-ad74-689371f71535" (UID: "cda998c8-9655-49e8-ad74-689371f71535"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.555709 4922 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.555962 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx867\" (UniqueName: \"kubernetes.io/projected/cda998c8-9655-49e8-ad74-689371f71535-kube-api-access-mx867\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.555972 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.555982 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda998c8-9655-49e8-ad74-689371f71535-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.689416 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.707455 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.719222 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:27 crc kubenswrapper[4922]: E0218 11:58:27.719660 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-log" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.719672 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-log" Feb 18 11:58:27 crc kubenswrapper[4922]: E0218 11:58:27.719692 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-metadata" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.719700 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-metadata" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.719910 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-metadata" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.719926 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda998c8-9655-49e8-ad74-689371f71535" containerName="nova-metadata-log" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.720999 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.724199 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.725962 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.741976 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.860926 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.860980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm79t\" (UniqueName: \"kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.861126 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.861263 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.861331 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.963628 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.963691 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.963710 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " 
pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.963780 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.963801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm79t\" (UniqueName: \"kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.964621 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.969474 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.969799 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.987048 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:27 crc kubenswrapper[4922]: I0218 11:58:27.989844 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm79t\" (UniqueName: \"kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t\") pod \"nova-metadata-0\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " pod="openstack/nova-metadata-0" Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.069860 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.358925 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31ef9a9b-fedd-4afd-8582-19ef097c98a2","Type":"ContainerStarted","Data":"dc8f2a77b2439df31b74dae3e22ccb5a2f39f8b3df7aa6e39e60625e8e0063ab"} Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.359323 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"31ef9a9b-fedd-4afd-8582-19ef097c98a2","Type":"ContainerStarted","Data":"06bfc094ce0deb9842f7a22e016d3ee33b6481c041a25f6fda233b69aaff27d6"} Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.360490 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.382679 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.382658528 podStartE2EDuration="2.382658528s" podCreationTimestamp="2026-02-18 11:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:28.376640916 +0000 UTC m=+1310.104344996" watchObservedRunningTime="2026-02-18 11:58:28.382658528 +0000 UTC m=+1310.110362608" Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.536967 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:58:28 crc kubenswrapper[4922]: W0218 11:58:28.541633 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10a54dd7_a74b_49c4_a631_ad8fe2c22d58.slice/crio-0eb58513d2b717b0e4ac58c75a07be6926255b4cc98ac974a0fd2d38b35e5a01 WatchSource:0}: Error finding container 0eb58513d2b717b0e4ac58c75a07be6926255b4cc98ac974a0fd2d38b35e5a01: Status 404 returned error can't find the container with id 0eb58513d2b717b0e4ac58c75a07be6926255b4cc98ac974a0fd2d38b35e5a01 Feb 18 11:58:28 crc kubenswrapper[4922]: I0218 11:58:28.989004 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda998c8-9655-49e8-ad74-689371f71535" path="/var/lib/kubelet/pods/cda998c8-9655-49e8-ad74-689371f71535/volumes" Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.374622 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerStarted","Data":"6eb028dcb7cb22f62f9b70fd21469cacb245f308ad0c1ab142649395ab9a2db8"} Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.374678 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerStarted","Data":"b08e25eab8acf946afba995372397a3ab97e8a1f62eebeb6161f0fd5696132b0"} Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.374692 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerStarted","Data":"0eb58513d2b717b0e4ac58c75a07be6926255b4cc98ac974a0fd2d38b35e5a01"} Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.404531 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.404512096 podStartE2EDuration="2.404512096s" podCreationTimestamp="2026-02-18 11:58:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:29.393504438 +0000 UTC m=+1311.121208508" watchObservedRunningTime="2026-02-18 11:58:29.404512096 +0000 UTC m=+1311.132216176" Feb 18 11:58:29 crc kubenswrapper[4922]: E0218 11:58:29.731591 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 11:58:29 crc kubenswrapper[4922]: E0218 11:58:29.733213 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 11:58:29 crc kubenswrapper[4922]: E0218 11:58:29.736261 4922 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 11:58:29 crc kubenswrapper[4922]: E0218 11:58:29.736308 4922 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" containerName="nova-scheduler-scheduler" Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.748571 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.821804 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:58:29 crc kubenswrapper[4922]: I0218 11:58:29.822082 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="dnsmasq-dns" containerID="cri-o://dcc50eb32469aa9ff5b74469bad439e5bb4a6c4458c1ea0f88da9a06c0a2437a" gracePeriod=10 Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.386618 4922 generic.go:334] "Generic (PLEG): container finished" podID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerID="ede12ae546b38989e02144d9c29b45bfe17c0490fbfcd99cc1a79c54dc349009" exitCode=137 Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.386678 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerDied","Data":"ede12ae546b38989e02144d9c29b45bfe17c0490fbfcd99cc1a79c54dc349009"} Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.386990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01766e5f-d149-4175-9fdb-15e65b0e0665","Type":"ContainerDied","Data":"9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8"} Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.387007 4922 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9caa36cbf4a061073a94710aba88a19e4ceb523f9ac965d8c82018ec849187a8" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.389042 4922 generic.go:334] "Generic (PLEG): container finished" podID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerID="dcc50eb32469aa9ff5b74469bad439e5bb4a6c4458c1ea0f88da9a06c0a2437a" exitCode=0 Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.389122 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" event={"ID":"d36f2285-2752-4cad-bf52-fe6ae0b262d1","Type":"ContainerDied","Data":"dcc50eb32469aa9ff5b74469bad439e5bb4a6c4458c1ea0f88da9a06c0a2437a"} Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.466878 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.475426 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530129 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530190 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530255 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530337 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530386 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530473 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkcct\" (UniqueName: \"kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530498 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mphx\" (UniqueName: \"kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530587 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530615 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530647 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530697 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd\") pod \"01766e5f-d149-4175-9fdb-15e65b0e0665\" (UID: \"01766e5f-d149-4175-9fdb-15e65b0e0665\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.530737 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc\") pod \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\" (UID: \"d36f2285-2752-4cad-bf52-fe6ae0b262d1\") " Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.533196 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.535657 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.540770 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts" (OuterVolumeSpecName: "scripts") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.543786 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx" (OuterVolumeSpecName: "kube-api-access-4mphx") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "kube-api-access-4mphx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.548608 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct" (OuterVolumeSpecName: "kube-api-access-bkcct") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "kube-api-access-bkcct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.569414 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.588576 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.593292 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.603299 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config" (OuterVolumeSpecName: "config") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.607259 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.608944 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d36f2285-2752-4cad-bf52-fe6ae0b262d1" (UID: "d36f2285-2752-4cad-bf52-fe6ae0b262d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.626552 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632851 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632887 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632899 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632908 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkcct\" (UniqueName: \"kubernetes.io/projected/01766e5f-d149-4175-9fdb-15e65b0e0665-kube-api-access-bkcct\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632918 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mphx\" (UniqueName: \"kubernetes.io/projected/d36f2285-2752-4cad-bf52-fe6ae0b262d1-kube-api-access-4mphx\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632927 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632935 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632942 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632950 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01766e5f-d149-4175-9fdb-15e65b0e0665-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632957 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632965 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d36f2285-2752-4cad-bf52-fe6ae0b262d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.632972 4922 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.652324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data" (OuterVolumeSpecName: "config-data") pod "01766e5f-d149-4175-9fdb-15e65b0e0665" (UID: "01766e5f-d149-4175-9fdb-15e65b0e0665"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:30 crc kubenswrapper[4922]: I0218 11:58:30.734270 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01766e5f-d149-4175-9fdb-15e65b0e0665-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.401522 4922 generic.go:334] "Generic (PLEG): container finished" podID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerID="783aeec10a7933650343fa4b55375b721d65807531d8e7dbf7c2823b6b487dd6" exitCode=0 Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.401639 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerDied","Data":"783aeec10a7933650343fa4b55375b721d65807531d8e7dbf7c2823b6b487dd6"} Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.405047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" event={"ID":"d36f2285-2752-4cad-bf52-fe6ae0b262d1","Type":"ContainerDied","Data":"f6b2696bce7ccb6880bdda930a5ccfaf927c9ebc64dad81fb193a050fd9b8c85"} Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.405094 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.405118 4922 scope.go:117] "RemoveContainer" containerID="dcc50eb32469aa9ff5b74469bad439e5bb4a6c4458c1ea0f88da9a06c0a2437a" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.405128 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-78vwz" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.452138 4922 scope.go:117] "RemoveContainer" containerID="839de4434ebe21a5f0abbc718b56284e0f7743bf3463c809b6cae16fa7c2db5d" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.456454 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.465110 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-78vwz"] Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.475831 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.486614 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500184 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500627 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-central-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500645 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-central-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500657 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="init" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500663 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="init" Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500675 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="proxy-httpd" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500682 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="proxy-httpd" Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500696 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="dnsmasq-dns" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500702 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="dnsmasq-dns" Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500713 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="sg-core" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500719 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="sg-core" Feb 18 11:58:31 crc kubenswrapper[4922]: E0218 11:58:31.500743 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-notification-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500749 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-notification-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500902 4922 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="sg-core" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500915 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-central-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500928 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" containerName="dnsmasq-dns" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500940 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="ceilometer-notification-agent" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.500951 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" containerName="proxy-httpd" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.502944 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.505783 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.507347 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.531105 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557617 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557716 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557737 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557762 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-d2h7j\" (UniqueName: \"kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.557878 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.636866 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659579 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659660 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659681 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659700 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659723 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659768 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2h7j\" (UniqueName: \"kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.659790 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.660398 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") 
" pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.660405 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.666146 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.673700 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.675084 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.677104 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.677152 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2h7j\" (UniqueName: \"kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j\") pod \"ceilometer-0\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.761173 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle\") pod \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.761384 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqjqm\" (UniqueName: \"kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm\") pod \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.761410 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data\") pod \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.761487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs\") pod \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\" (UID: \"2856a778-a8b2-4740-8d2a-4a6f64619bc2\") " Feb 18 11:58:31 crc 
kubenswrapper[4922]: I0218 11:58:31.762421 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs" (OuterVolumeSpecName: "logs") pod "2856a778-a8b2-4740-8d2a-4a6f64619bc2" (UID: "2856a778-a8b2-4740-8d2a-4a6f64619bc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.767022 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm" (OuterVolumeSpecName: "kube-api-access-bqjqm") pod "2856a778-a8b2-4740-8d2a-4a6f64619bc2" (UID: "2856a778-a8b2-4740-8d2a-4a6f64619bc2"). InnerVolumeSpecName "kube-api-access-bqjqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.790904 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2856a778-a8b2-4740-8d2a-4a6f64619bc2" (UID: "2856a778-a8b2-4740-8d2a-4a6f64619bc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.794840 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data" (OuterVolumeSpecName: "config-data") pod "2856a778-a8b2-4740-8d2a-4a6f64619bc2" (UID: "2856a778-a8b2-4740-8d2a-4a6f64619bc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.840876 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.863392 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.863420 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2856a778-a8b2-4740-8d2a-4a6f64619bc2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.863429 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqjqm\" (UniqueName: \"kubernetes.io/projected/2856a778-a8b2-4740-8d2a-4a6f64619bc2-kube-api-access-bqjqm\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:31 crc kubenswrapper[4922]: I0218 11:58:31.863437 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2856a778-a8b2-4740-8d2a-4a6f64619bc2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.192335 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.270397 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data\") pod \"4e0d2e17-4045-420d-817b-41a1fc66c425\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.270472 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvvg5\" (UniqueName: \"kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5\") pod \"4e0d2e17-4045-420d-817b-41a1fc66c425\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.270523 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle\") pod \"4e0d2e17-4045-420d-817b-41a1fc66c425\" (UID: \"4e0d2e17-4045-420d-817b-41a1fc66c425\") " Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.275274 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5" (OuterVolumeSpecName: "kube-api-access-qvvg5") pod "4e0d2e17-4045-420d-817b-41a1fc66c425" (UID: "4e0d2e17-4045-420d-817b-41a1fc66c425"). InnerVolumeSpecName "kube-api-access-qvvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.319351 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data" (OuterVolumeSpecName: "config-data") pod "4e0d2e17-4045-420d-817b-41a1fc66c425" (UID: "4e0d2e17-4045-420d-817b-41a1fc66c425"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.331940 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e0d2e17-4045-420d-817b-41a1fc66c425" (UID: "4e0d2e17-4045-420d-817b-41a1fc66c425"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.365810 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.373311 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.373345 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvvg5\" (UniqueName: \"kubernetes.io/projected/4e0d2e17-4045-420d-817b-41a1fc66c425-kube-api-access-qvvg5\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.373374 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0d2e17-4045-420d-817b-41a1fc66c425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.417515 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerStarted","Data":"be12ad7e80f1808fcecd3d9fd6a8cd2df6d592559b9d82740e6f5d36a70ad362"} Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.419388 4922 generic.go:334] "Generic (PLEG): container finished" podID="4e0d2e17-4045-420d-817b-41a1fc66c425" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" exitCode=0 Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.419561 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e0d2e17-4045-420d-817b-41a1fc66c425","Type":"ContainerDied","Data":"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85"} Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.419606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e0d2e17-4045-420d-817b-41a1fc66c425","Type":"ContainerDied","Data":"5c4ed2cdb2b752aa88aa0c848f0558afd5579a29c6af7de94b0afbe9de2ec4ec"} Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.419634 4922 scope.go:117] "RemoveContainer" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.419757 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.428209 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2856a778-a8b2-4740-8d2a-4a6f64619bc2","Type":"ContainerDied","Data":"0bc208ef650f9f609657312f6d8f2198c5a12a9f657dff0516610817ba9c8516"} Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.428283 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.462665 4922 scope.go:117] "RemoveContainer" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" Feb 18 11:58:32 crc kubenswrapper[4922]: E0218 11:58:32.463478 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85\": container with ID starting with c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85 not found: ID does not exist" containerID="c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.463669 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85"} err="failed to get container status \"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85\": rpc error: code = NotFound desc = could not find container \"c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85\": container with ID starting with c2ec66f0a6fe1edbd0a3743686091f7121951868a859f3c215737d15adf40c85 not found: ID does not exist" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.463691 4922 scope.go:117] "RemoveContainer" containerID="783aeec10a7933650343fa4b55375b721d65807531d8e7dbf7c2823b6b487dd6" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.477832 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.490466 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.513884 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.524384 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.533999 4922 scope.go:117] "RemoveContainer" containerID="deb34b70b7dbc61de6d4d92ee5f97c64b68483ab7a9de52c357c2663ff82c337" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536025 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: E0218 11:58:32.536443 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-api" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536461 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-api" Feb 18 11:58:32 crc kubenswrapper[4922]: E0218 11:58:32.536488 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-log" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536495 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-log" Feb 18 11:58:32 crc kubenswrapper[4922]: E0218 11:58:32.536521 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" containerName="nova-scheduler-scheduler" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536528 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" 
containerName="nova-scheduler-scheduler" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536702 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" containerName="nova-scheduler-scheduler" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536717 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-log" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.536735 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" containerName="nova-api-api" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.537350 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.539337 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.547519 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.561560 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.563203 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.567740 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.586582 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688353 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688415 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688456 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwtr\" (UniqueName: \"kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688498 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688527 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.688556 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vftr\" (UniqueName: \"kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.789937 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790043 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790086 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790109 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwtr\" (UniqueName: \"kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790146 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790183 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790219 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vftr\" (UniqueName: \"kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.790796 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.798222 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.799320 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.800332 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.802742 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.808090 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwtr\" (UniqueName: \"kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr\") pod \"nova-scheduler-0\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.808655 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vftr\" (UniqueName: \"kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr\") pod \"nova-api-0\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.869037 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.891681 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.984969 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01766e5f-d149-4175-9fdb-15e65b0e0665" path="/var/lib/kubelet/pods/01766e5f-d149-4175-9fdb-15e65b0e0665/volumes" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.986130 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2856a778-a8b2-4740-8d2a-4a6f64619bc2" path="/var/lib/kubelet/pods/2856a778-a8b2-4740-8d2a-4a6f64619bc2/volumes" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.986730 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0d2e17-4045-420d-817b-41a1fc66c425" path="/var/lib/kubelet/pods/4e0d2e17-4045-420d-817b-41a1fc66c425/volumes" Feb 18 11:58:32 crc kubenswrapper[4922]: I0218 11:58:32.987757 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36f2285-2752-4cad-bf52-fe6ae0b262d1" path="/var/lib/kubelet/pods/d36f2285-2752-4cad-bf52-fe6ae0b262d1/volumes" Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.070489 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.070567 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 11:58:33 crc kubenswrapper[4922]: W0218 11:58:33.329466 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47d3c20f_b062_4987_bbc5_c0c030d5f340.slice/crio-601e916b018d0517ad916460b508b132b7e210db7b6fd91891fecb64ba408b19 WatchSource:0}: Error finding container 601e916b018d0517ad916460b508b132b7e210db7b6fd91891fecb64ba408b19: Status 404 returned error can't find the container with id 601e916b018d0517ad916460b508b132b7e210db7b6fd91891fecb64ba408b19 Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.333022 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.422708 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.448230 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe95484c-ea5d-4ea3-8915-bb6734014373","Type":"ContainerStarted","Data":"bde417a0228740daf9a9d49c2dd39d718faec4df17eddbb3036309edba429d67"} Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.453211 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerStarted","Data":"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db"} Feb 18 11:58:33 crc kubenswrapper[4922]: I0218 11:58:33.458404 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerStarted","Data":"601e916b018d0517ad916460b508b132b7e210db7b6fd91891fecb64ba408b19"} Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.489271 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerStarted","Data":"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21"} Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.493077 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerStarted","Data":"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8"} Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.493117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerStarted","Data":"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03"} Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.498581 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe95484c-ea5d-4ea3-8915-bb6734014373","Type":"ContainerStarted","Data":"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5"} Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.519167 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.519130412 podStartE2EDuration="2.519130412s" podCreationTimestamp="2026-02-18 11:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:34.515634014 +0000 UTC m=+1316.243338094" watchObservedRunningTime="2026-02-18 11:58:34.519130412 +0000 UTC m=+1316.246834502" Feb 18 11:58:34 crc kubenswrapper[4922]: I0218 11:58:34.550330 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5503119400000003 podStartE2EDuration="2.55031194s" podCreationTimestamp="2026-02-18 11:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:34.536630454 +0000 UTC m=+1316.264334534" watchObservedRunningTime="2026-02-18 11:58:34.55031194 +0000 UTC m=+1316.278016020" Feb 18 11:58:35 crc kubenswrapper[4922]: I0218 11:58:35.509996 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerStarted","Data":"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6"} Feb 18 11:58:36 crc kubenswrapper[4922]: I0218 11:58:36.523840 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerStarted","Data":"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112"} Feb 18 11:58:36 crc kubenswrapper[4922]: I0218 11:58:36.524802 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:58:36 crc kubenswrapper[4922]: I0218 11:58:36.560613 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.724613164 podStartE2EDuration="5.560579332s" podCreationTimestamp="2026-02-18 11:58:31 +0000 UTC" firstStartedPulling="2026-02-18 11:58:32.374029613 +0000 UTC m=+1314.101733693" lastFinishedPulling="2026-02-18 11:58:36.209995761 +0000 UTC m=+1317.937699861" observedRunningTime="2026-02-18 11:58:36.548265571 +0000 UTC m=+1318.275969641" watchObservedRunningTime="2026-02-18 11:58:36.560579332 +0000 UTC m=+1318.288283412" Feb 18 11:58:36 crc kubenswrapper[4922]: I0218 11:58:36.790832 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 11:58:37 crc kubenswrapper[4922]: I0218 11:58:37.870047 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 11:58:38 crc kubenswrapper[4922]: I0218 11:58:38.070797 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 11:58:38 crc kubenswrapper[4922]: I0218 11:58:38.070841 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 11:58:39 crc kubenswrapper[4922]: I0218 11:58:39.082616 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:39 crc kubenswrapper[4922]: I0218 11:58:39.082631 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:42 crc kubenswrapper[4922]: I0218 11:58:42.869804 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 11:58:42 crc kubenswrapper[4922]: I0218 11:58:42.892090 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:58:42 crc kubenswrapper[4922]: I0218 11:58:42.892402 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:58:42 crc kubenswrapper[4922]: I0218 11:58:42.903149 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 11:58:43 crc kubenswrapper[4922]: I0218 11:58:43.626753 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 11:58:43 crc kubenswrapper[4922]: I0218 11:58:43.974823 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:43 crc kubenswrapper[4922]: I0218 11:58:43.975243 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:58:48 crc kubenswrapper[4922]: I0218 11:58:48.076295 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 11:58:48 crc kubenswrapper[4922]: I0218 11:58:48.077608 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 11:58:48 crc kubenswrapper[4922]: I0218 11:58:48.082319 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 11:58:48 crc kubenswrapper[4922]: I0218 11:58:48.641451 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.685060 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" containerID="c14ea122843eb522430115cdefefb0c3b7d6b58a0e5c203b5fb2fca56b1e56ba" exitCode=137 Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.686051 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ce92cda-4459-4a73-8fc2-84bbb56eccce","Type":"ContainerDied","Data":"c14ea122843eb522430115cdefefb0c3b7d6b58a0e5c203b5fb2fca56b1e56ba"} Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.686107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ce92cda-4459-4a73-8fc2-84bbb56eccce","Type":"ContainerDied","Data":"92d7d00c3ed4a00a1aeac19f0bbbe19ab3df77be12277cadf25b39bf998465e7"} Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.686122 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d7d00c3ed4a00a1aeac19f0bbbe19ab3df77be12277cadf25b39bf998465e7" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.710884 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.782961 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data\") pod \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.783229 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle\") pod \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.783302 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5hb\" (UniqueName: \"kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb\") pod \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\" (UID: \"3ce92cda-4459-4a73-8fc2-84bbb56eccce\") " Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.792407 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb" (OuterVolumeSpecName: "kube-api-access-mv5hb") pod "3ce92cda-4459-4a73-8fc2-84bbb56eccce" (UID: "3ce92cda-4459-4a73-8fc2-84bbb56eccce"). InnerVolumeSpecName "kube-api-access-mv5hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.813278 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data" (OuterVolumeSpecName: "config-data") pod "3ce92cda-4459-4a73-8fc2-84bbb56eccce" (UID: "3ce92cda-4459-4a73-8fc2-84bbb56eccce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.815464 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ce92cda-4459-4a73-8fc2-84bbb56eccce" (UID: "3ce92cda-4459-4a73-8fc2-84bbb56eccce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.885316 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.885630 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5hb\" (UniqueName: \"kubernetes.io/projected/3ce92cda-4459-4a73-8fc2-84bbb56eccce-kube-api-access-mv5hb\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:51 crc kubenswrapper[4922]: I0218 11:58:51.885644 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ce92cda-4459-4a73-8fc2-84bbb56eccce-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.696231 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.740581 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.764543 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.778948 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:52 crc kubenswrapper[4922]: E0218 11:58:52.779716 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.779742 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.780054 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.783112 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.786480 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.786803 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.787001 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.797569 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.898989 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.899048 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.899847 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.899890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.903899 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.904707 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2whs\" (UniqueName: \"kubernetes.io/projected/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-kube-api-access-h2whs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.904739 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.904748 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.905095 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.905169 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.905320 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:52 crc kubenswrapper[4922]: E0218 11:58:52.932199 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce92cda_4459_4a73_8fc2_84bbb56eccce.slice/crio-92d7d00c3ed4a00a1aeac19f0bbbe19ab3df77be12277cadf25b39bf998465e7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce92cda_4459_4a73_8fc2_84bbb56eccce.slice\": RecentStats: unable to find data in memory cache]" Feb 18 11:58:52 crc kubenswrapper[4922]: I0218 11:58:52.988150 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce92cda-4459-4a73-8fc2-84bbb56eccce" path="/var/lib/kubelet/pods/3ce92cda-4459-4a73-8fc2-84bbb56eccce/volumes" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.007044 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.007125 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.007223 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2whs\" (UniqueName: \"kubernetes.io/projected/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-kube-api-access-h2whs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.007243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.007308 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.021823 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.029620 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.032026 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.035000 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.046094 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2whs\" (UniqueName: \"kubernetes.io/projected/5f598a92-b7cc-4584-9a17-d4c6d031ceeb-kube-api-access-h2whs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5f598a92-b7cc-4584-9a17-d4c6d031ceeb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.104731 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.106496 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.110783 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.126293 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.211783 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.211868 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.211902 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbg5\" (UniqueName: \"kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.211968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.211999 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.212020 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.314137 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.314650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chbg5\" (UniqueName: \"kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.314882 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.314978 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.315014 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.315201 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.316280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.316990 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.317486 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.318022 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.318354 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.352135 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chbg5\" (UniqueName: 
\"kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5\") pod \"dnsmasq-dns-cd5cbd7b9-c2lrw\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.460023 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.712224 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 11:58:53 crc kubenswrapper[4922]: I0218 11:58:53.999855 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 11:58:54 crc kubenswrapper[4922]: W0218 11:58:54.001546 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ec5b650_c58d_4b8b_a903_7b95c211139c.slice/crio-ec7c038a1e2201c0d4227eabf40ab3692ebca8c65c8f35a81b082989a7f57de6 WatchSource:0}: Error finding container ec7c038a1e2201c0d4227eabf40ab3692ebca8c65c8f35a81b082989a7f57de6: Status 404 returned error can't find the container with id ec7c038a1e2201c0d4227eabf40ab3692ebca8c65c8f35a81b082989a7f57de6 Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.721223 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f598a92-b7cc-4584-9a17-d4c6d031ceeb","Type":"ContainerStarted","Data":"003eac58c03e9c8f7ea6c029fc1db8f62780dac546972e4e818e3806deaeb204"} Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.721832 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5f598a92-b7cc-4584-9a17-d4c6d031ceeb","Type":"ContainerStarted","Data":"cfc37d913d6a2fdff18a19dd632edb4a4e54665ecfc74a7e53fdf5d756b1cb82"} Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.724975 4922 generic.go:334] "Generic (PLEG): container finished" podID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerID="002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a" exitCode=0 Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.725655 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" event={"ID":"7ec5b650-c58d-4b8b-a903-7b95c211139c","Type":"ContainerDied","Data":"002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a"} Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.725694 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" event={"ID":"7ec5b650-c58d-4b8b-a903-7b95c211139c","Type":"ContainerStarted","Data":"ec7c038a1e2201c0d4227eabf40ab3692ebca8c65c8f35a81b082989a7f57de6"} Feb 18 11:58:54 crc kubenswrapper[4922]: I0218 11:58:54.750020 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.749999754 podStartE2EDuration="2.749999754s" podCreationTimestamp="2026-02-18 11:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:54.742871094 +0000 UTC m=+1336.470575174" watchObservedRunningTime="2026-02-18 11:58:54.749999754 +0000 UTC m=+1336.477703834" Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.463791 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.676113 4922 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.676493 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="proxy-httpd" containerID="cri-o://3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.676528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-notification-agent" containerID="cri-o://f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.676528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="sg-core" containerID="cri-o://6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.676676 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-central-agent" containerID="cri-o://722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.691920 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.214:3000/\": EOF" Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.740750 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" event={"ID":"7ec5b650-c58d-4b8b-a903-7b95c211139c","Type":"ContainerStarted","Data":"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4"} Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.740899 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-log" containerID="cri-o://eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.740990 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.741030 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-api" containerID="cri-o://d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8" gracePeriod=30 Feb 18 11:58:55 crc kubenswrapper[4922]: I0218 11:58:55.775975 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" podStartSLOduration=2.775958967 podStartE2EDuration="2.775958967s" podCreationTimestamp="2026-02-18 11:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:58:55.771991876 +0000 UTC m=+1337.499695956" watchObservedRunningTime="2026-02-18 11:58:55.775958967 +0000 UTC m=+1337.503663047" Feb 18 11:58:56 crc 
kubenswrapper[4922]: I0218 11:58:56.784816 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4572162-62bc-4e43-b260-497a609abd8e" containerID="3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112" exitCode=0 Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.785117 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4572162-62bc-4e43-b260-497a609abd8e" containerID="6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6" exitCode=2 Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.785131 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4572162-62bc-4e43-b260-497a609abd8e" containerID="722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db" exitCode=0 Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.785031 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerDied","Data":"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112"} Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.785214 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerDied","Data":"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6"} Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.785233 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerDied","Data":"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db"} Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.791043 4922 generic.go:334] "Generic (PLEG): container finished" podID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerID="eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03" exitCode=143 Feb 18 11:58:56 crc kubenswrapper[4922]: I0218 11:58:56.791107 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerDied","Data":"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03"} Feb 18 11:58:58 crc kubenswrapper[4922]: I0218 11:58:58.112119 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.451871 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.550025 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle\") pod \"47d3c20f-b062-4987-bbc5-c0c030d5f340\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.550109 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data\") pod \"47d3c20f-b062-4987-bbc5-c0c030d5f340\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.550261 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs\") pod \"47d3c20f-b062-4987-bbc5-c0c030d5f340\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.550320 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vftr\" (UniqueName: \"kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr\") pod \"47d3c20f-b062-4987-bbc5-c0c030d5f340\" (UID: \"47d3c20f-b062-4987-bbc5-c0c030d5f340\") " Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.550631 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs" (OuterVolumeSpecName: "logs") pod "47d3c20f-b062-4987-bbc5-c0c030d5f340" (UID: "47d3c20f-b062-4987-bbc5-c0c030d5f340"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.551323 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47d3c20f-b062-4987-bbc5-c0c030d5f340-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.558398 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr" (OuterVolumeSpecName: "kube-api-access-4vftr") pod "47d3c20f-b062-4987-bbc5-c0c030d5f340" (UID: "47d3c20f-b062-4987-bbc5-c0c030d5f340"). InnerVolumeSpecName "kube-api-access-4vftr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.589153 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data" (OuterVolumeSpecName: "config-data") pod "47d3c20f-b062-4987-bbc5-c0c030d5f340" (UID: "47d3c20f-b062-4987-bbc5-c0c030d5f340"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.603142 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47d3c20f-b062-4987-bbc5-c0c030d5f340" (UID: "47d3c20f-b062-4987-bbc5-c0c030d5f340"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.653412 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.653455 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47d3c20f-b062-4987-bbc5-c0c030d5f340-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.653467 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vftr\" (UniqueName: \"kubernetes.io/projected/47d3c20f-b062-4987-bbc5-c0c030d5f340-kube-api-access-4vftr\") on node \"crc\" DevicePath \"\"" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.820860 4922 generic.go:334] "Generic (PLEG): container finished" podID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerID="d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8" exitCode=0 Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.820900 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerDied","Data":"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8"} Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.820924 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"47d3c20f-b062-4987-bbc5-c0c030d5f340","Type":"ContainerDied","Data":"601e916b018d0517ad916460b508b132b7e210db7b6fd91891fecb64ba408b19"} Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.820941 4922 scope.go:117] "RemoveContainer" containerID="d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.821071 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.845431 4922 scope.go:117] "RemoveContainer" containerID="eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.861011 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.873723 4922 scope.go:117] "RemoveContainer" containerID="d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8" Feb 18 11:58:59 crc kubenswrapper[4922]: E0218 11:58:59.874216 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8\": container with ID starting with d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8 not found: ID does not exist" containerID="d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.874245 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8"} err="failed to get container status \"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8\": rpc error: code = NotFound desc = could not find container \"d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8\": container with ID starting with d99ebc30f0488568efddde0bb6744320dc826a942aa1a3c10cb346c0e3fb14c8 not found: ID does not exist" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.874266 4922 scope.go:117] "RemoveContainer" containerID="eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.874318 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:59 crc kubenswrapper[4922]: E0218 11:58:59.874594 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03\": container with ID starting with eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03 not found: ID does not exist" containerID="eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.874615 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03"} err="failed to get container status \"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03\": rpc error: code = NotFound desc = could not find container \"eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03\": container with ID starting with eb934588c08751d8b95e01ad04d470827c74e1b9d1823bd58e871581661e4e03 not found: ID does not exist" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.885372 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:59 crc kubenswrapper[4922]: E0218 11:58:59.885847 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-api" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.885859 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-api" Feb 18 11:58:59 crc 
kubenswrapper[4922]: E0218 11:58:59.885871 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-log" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.885878 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-log" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.886061 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-log" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.886074 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" containerName="nova-api-api" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.887128 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.925309 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.925578 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.925621 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.947140 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.960139 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.960655 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.960780 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.960835 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn58b\" (UniqueName: \"kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.961265 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:58:59 crc kubenswrapper[4922]: I0218 11:58:59.961431 4922 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063334 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063729 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063785 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.063951 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn58b\" (UniqueName: \"kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.064933 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.071824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.072902 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.075245 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.078252 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.089171 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn58b\" (UniqueName: \"kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b\") pod \"nova-api-0\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.260111 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.448386 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.589754 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.589816 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.589884 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.589919 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2h7j\" (UniqueName: \"kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.589956 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.590039 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.590078 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd\") pod \"e4572162-62bc-4e43-b260-497a609abd8e\" (UID: \"e4572162-62bc-4e43-b260-497a609abd8e\") " Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.591979 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.592097 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.604430 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts" (OuterVolumeSpecName: "scripts") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.612548 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j" (OuterVolumeSpecName: "kube-api-access-d2h7j") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "kube-api-access-d2h7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.628647 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.692013 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.692036 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.692047 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2h7j\" (UniqueName: \"kubernetes.io/projected/e4572162-62bc-4e43-b260-497a609abd8e-kube-api-access-d2h7j\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.692056 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.692064 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4572162-62bc-4e43-b260-497a609abd8e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.709229 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.715680 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data" (OuterVolumeSpecName: "config-data") pod "e4572162-62bc-4e43-b260-497a609abd8e" (UID: "e4572162-62bc-4e43-b260-497a609abd8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.793551 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.793759 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4572162-62bc-4e43-b260-497a609abd8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.793869 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.832431 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerStarted","Data":"1e23b924bddad76596057cd410450387011174217e8c9d8f3144d75f72013eac"} Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.836103 4922 generic.go:334] "Generic (PLEG): container finished" podID="e4572162-62bc-4e43-b260-497a609abd8e" containerID="f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21" exitCode=0 Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.836143 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerDied","Data":"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21"} Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.836450 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4572162-62bc-4e43-b260-497a609abd8e","Type":"ContainerDied","Data":"be12ad7e80f1808fcecd3d9fd6a8cd2df6d592559b9d82740e6f5d36a70ad362"} Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.836183 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.836510 4922 scope.go:117] "RemoveContainer" containerID="3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.862354 4922 scope.go:117] "RemoveContainer" containerID="6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.921424 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.939455 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.953724 4922 scope.go:117] "RemoveContainer" containerID="f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.964328 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:00 crc kubenswrapper[4922]: E0218 11:59:00.965010 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="proxy-httpd" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965029 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="proxy-httpd" Feb 18 11:59:00 crc kubenswrapper[4922]: E0218 11:59:00.965054 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-central-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965063 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-central-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: E0218 11:59:00.965084 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="sg-core" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965092 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="sg-core" Feb 18 11:59:00 crc kubenswrapper[4922]: E0218 11:59:00.965107 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-notification-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965113 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-notification-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965310 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-notification-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965325 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="sg-core" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965337 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="proxy-httpd" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.965356 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4572162-62bc-4e43-b260-497a609abd8e" containerName="ceilometer-central-agent" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.969575 4922 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.976905 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:59:00 crc kubenswrapper[4922]: I0218 11:59:00.977423 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.082862 4922 scope.go:117] "RemoveContainer" containerID="722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.159854 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d3c20f-b062-4987-bbc5-c0c030d5f340" path="/var/lib/kubelet/pods/47d3c20f-b062-4987-bbc5-c0c030d5f340/volumes" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.162024 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4572162-62bc-4e43-b260-497a609abd8e" path="/var/lib/kubelet/pods/e4572162-62bc-4e43-b260-497a609abd8e/volumes" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.163841 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.174683 4922 scope.go:117] "RemoveContainer" containerID="3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112" Feb 18 11:59:01 crc kubenswrapper[4922]: E0218 11:59:01.175206 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112\": container with ID starting with 3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112 not found: ID does not exist" containerID="3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.175247 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112"} err="failed to get container status \"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112\": rpc error: code = NotFound desc = could not find container \"3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112\": container with ID starting with 3883aa82d9558789bd29239e1692a77ac44f74c43c7e20ecd49ca776053c9112 not found: ID does not exist" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.175294 4922 scope.go:117] "RemoveContainer" containerID="6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6" Feb 18 11:59:01 crc kubenswrapper[4922]: E0218 11:59:01.175618 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6\": container with ID starting with 6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6 not found: ID does not exist" containerID="6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.175663 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6"} err="failed to get container status \"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6\": rpc error: code = NotFound desc = could not find container 
\"6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6\": container with ID starting with 6f03b8ff2055021074f5cc7bd367566e2ac086e99a1b9b651b0a938adeca63c6 not found: ID does not exist" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.175685 4922 scope.go:117] "RemoveContainer" containerID="f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21" Feb 18 11:59:01 crc kubenswrapper[4922]: E0218 11:59:01.176076 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21\": container with ID starting with f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21 not found: ID does not exist" containerID="f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.176121 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21"} err="failed to get container status \"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21\": rpc error: code = NotFound desc = could not find container \"f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21\": container with ID starting with f89cedadf6eff1297f8ed80ba94b9fc241d82002338f1e62865bf185883f0e21 not found: ID does not exist" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.176150 4922 scope.go:117] "RemoveContainer" containerID="722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db" Feb 18 11:59:01 crc kubenswrapper[4922]: E0218 11:59:01.176675 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db\": container with ID starting with 722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db not found: ID does not exist" containerID="722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.176726 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db"} err="failed to get container status \"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db\": rpc error: code = NotFound desc = could not find container \"722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db\": container with ID starting with 722060fc2ca1ae896424d7a12df3ac6d06986256febfd98592ede2235bdf88db not found: ID does not exist" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.194960 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.195015 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.195202 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.195693 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.195732 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h25l\" (UniqueName: \"kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.195772 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.196068 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.297814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.298215 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.298248 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h25l\" (UniqueName: \"kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.298275 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.298354 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc 
kubenswrapper[4922]: I0218 11:59:01.299120 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.299148 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.298443 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.299728 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.304425 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.305490 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.305587 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.320162 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.321173 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h25l\" (UniqueName: \"kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l\") pod \"ceilometer-0\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.465755 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.860009 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerStarted","Data":"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94"} Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.860284 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerStarted","Data":"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422"} Feb 18 11:59:01 crc kubenswrapper[4922]: I0218 11:59:01.880871 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.880846613 podStartE2EDuration="2.880846613s" podCreationTimestamp="2026-02-18 11:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:01.87794129 +0000 UTC m=+1343.605645390" watchObservedRunningTime="2026-02-18 11:59:01.880846613 +0000 UTC m=+1343.608550693" Feb 18 11:59:02 crc kubenswrapper[4922]: I0218 11:59:02.020110 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:02 crc kubenswrapper[4922]: W0218 11:59:02.022866 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3b8f165_b92e_47d4_ada4_5eee351d6a5a.slice/crio-f48c4f106861c2b0fbbd89f3ec527e011b85b43938cee7887c256abc61904ff7 WatchSource:0}: Error finding container f48c4f106861c2b0fbbd89f3ec527e011b85b43938cee7887c256abc61904ff7: Status 404 returned error can't find the container with id f48c4f106861c2b0fbbd89f3ec527e011b85b43938cee7887c256abc61904ff7 Feb 18 11:59:02 crc kubenswrapper[4922]: I0218 11:59:02.875537 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerStarted","Data":"a917c6100631e0cecbb43e73dcdd3017b9ee5fcde8f4b2eb40ffa1cf7a6ddd61"} Feb 18 11:59:02 crc kubenswrapper[4922]: I0218 11:59:02.876054 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerStarted","Data":"f48c4f106861c2b0fbbd89f3ec527e011b85b43938cee7887c256abc61904ff7"} Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.112210 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.134765 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.461500 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.565989 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.566197 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="dnsmasq-dns" containerID="cri-o://42671006b982c23fe661235bf5a74dcd5e79ee9d03166f4a0fe4596b08069112" gracePeriod=10 Feb 18 
11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.889606 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerStarted","Data":"c0b1ec47222cf49451bd1b033a64e0ee80b9aa5aed82b2529850f17df47b4532"} Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.894644 4922 generic.go:334] "Generic (PLEG): container finished" podID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerID="42671006b982c23fe661235bf5a74dcd5e79ee9d03166f4a0fe4596b08069112" exitCode=0 Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.896212 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" event={"ID":"5fe07f11-0f10-4aa7-ab94-51d42b7a6367","Type":"ContainerDied","Data":"42671006b982c23fe661235bf5a74dcd5e79ee9d03166f4a0fe4596b08069112"} Feb 18 11:59:03 crc kubenswrapper[4922]: I0218 11:59:03.910960 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.787480 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dqtvp"] Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.790831 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.792703 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.824876 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.837967 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqtvp"] Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.904124 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.904478 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.904525 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb558\" (UniqueName: \"kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.904557 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:04 crc 
kubenswrapper[4922]: I0218 11:59:04.931252 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.936641 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" event={"ID":"5fe07f11-0f10-4aa7-ab94-51d42b7a6367","Type":"ContainerDied","Data":"0199a345c591cc6803424fbda06bc8be81c6f2649bc4692acfd3ea6b8d197abc"} Feb 18 11:59:04 crc kubenswrapper[4922]: I0218 11:59:04.936684 4922 scope.go:117] "RemoveContainer" containerID="42671006b982c23fe661235bf5a74dcd5e79ee9d03166f4a0fe4596b08069112" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.017702 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.017776 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.017817 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb558\" (UniqueName: \"kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.017848 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.040387 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.044994 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.048995 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.055723 4922 scope.go:117] "RemoveContainer" containerID="8aa54b45b2152668d79b56f9c12b91df1925011ab0dbd7a2601a1ffa9f2d27a9" Feb 18 11:59:05 crc 
kubenswrapper[4922]: I0218 11:59:05.065917 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb558\" (UniqueName: \"kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558\") pod \"nova-cell1-cell-mapping-dqtvp\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.134859 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.134941 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.134991 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.135046 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.135121 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.135148 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29tzw\" (UniqueName: \"kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw\") pod \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\" (UID: \"5fe07f11-0f10-4aa7-ab94-51d42b7a6367\") " Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.157102 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw" (OuterVolumeSpecName: "kube-api-access-29tzw") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "kube-api-access-29tzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.187532 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.201515 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config" (OuterVolumeSpecName: "config") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.218786 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.220590 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238743 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238802 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238814 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29tzw\" (UniqueName: \"kubernetes.io/projected/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-kube-api-access-29tzw\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238829 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238857 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.238889 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fe07f11-0f10-4aa7-ab94-51d42b7a6367" (UID: "5fe07f11-0f10-4aa7-ab94-51d42b7a6367"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.289482 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.340496 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe07f11-0f10-4aa7-ab94-51d42b7a6367-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.759416 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqtvp"] Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.964961 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerStarted","Data":"4e1f36256af229f8391ab980166248080c950f7ddf5a27ccd1a51d73b649da7a"} Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.966498 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" Feb 18 11:59:05 crc kubenswrapper[4922]: I0218 11:59:05.972255 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqtvp" event={"ID":"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc","Type":"ContainerStarted","Data":"55f5fe7cc5fea944498467b4885f7a182e7fdf03f39f9a7436de1aa8836d34cf"} Feb 18 11:59:06 crc kubenswrapper[4922]: I0218 11:59:06.003879 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:59:06 crc kubenswrapper[4922]: I0218 11:59:06.012998 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pc7hm"] Feb 18 11:59:06 crc kubenswrapper[4922]: I0218 11:59:06.986294 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" path="/var/lib/kubelet/pods/5fe07f11-0f10-4aa7-ab94-51d42b7a6367/volumes" Feb 18 11:59:06 crc kubenswrapper[4922]: I0218 11:59:06.988797 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqtvp" event={"ID":"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc","Type":"ContainerStarted","Data":"fe9e0b8c506169c8ff2962eae62f59ba2039b683aba9a221a651171f7491288f"} Feb 18 11:59:07 crc kubenswrapper[4922]: I0218 11:59:07.005744 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dqtvp" podStartSLOduration=3.005724069 podStartE2EDuration="3.005724069s" podCreationTimestamp="2026-02-18 11:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:07.003755859 +0000 UTC m=+1348.731459959" watchObservedRunningTime="2026-02-18 11:59:07.005724069 +0000 UTC m=+1348.733428149" Feb 18 11:59:09 crc kubenswrapper[4922]: I0218 11:59:09.009711 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerStarted","Data":"230a0bc54b8dd07f66eb86ad2215c814c3b880945116c4dab8556a97fba068aa"} Feb 18 11:59:09 crc kubenswrapper[4922]: I0218 11:59:09.010224 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:59:09 crc kubenswrapper[4922]: I0218 11:59:09.037964 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.294086494 podStartE2EDuration="9.037864193s" podCreationTimestamp="2026-02-18 11:59:00 +0000 UTC" firstStartedPulling="2026-02-18 
11:59:02.028236269 +0000 UTC m=+1343.755940349" lastFinishedPulling="2026-02-18 11:59:07.772013968 +0000 UTC m=+1349.499718048" observedRunningTime="2026-02-18 11:59:09.028343462 +0000 UTC m=+1350.756047532" watchObservedRunningTime="2026-02-18 11:59:09.037864193 +0000 UTC m=+1350.765568273" Feb 18 11:59:09 crc kubenswrapper[4922]: I0218 11:59:09.755670 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-pc7hm" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.209:5353: i/o timeout" Feb 18 11:59:10 crc kubenswrapper[4922]: I0218 11:59:10.261660 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:59:10 crc kubenswrapper[4922]: I0218 11:59:10.261969 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:59:11 crc kubenswrapper[4922]: I0218 11:59:11.277651 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.219:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:11 crc kubenswrapper[4922]: I0218 11:59:11.277713 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.219:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:12 crc kubenswrapper[4922]: I0218 11:59:12.036537 4922 generic.go:334] "Generic (PLEG): container finished" podID="b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" containerID="fe9e0b8c506169c8ff2962eae62f59ba2039b683aba9a221a651171f7491288f" exitCode=0 Feb 18 11:59:12 crc kubenswrapper[4922]: I0218 11:59:12.036600 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqtvp" event={"ID":"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc","Type":"ContainerDied","Data":"fe9e0b8c506169c8ff2962eae62f59ba2039b683aba9a221a651171f7491288f"} Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.430920 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.499311 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb558\" (UniqueName: \"kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558\") pod \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.499684 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle\") pod \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.499814 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data\") pod \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.499849 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts\") pod \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\" (UID: \"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc\") " Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.505335 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts" (OuterVolumeSpecName: "scripts") pod "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" (UID: "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.505793 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558" (OuterVolumeSpecName: "kube-api-access-fb558") pod "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" (UID: "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc"). InnerVolumeSpecName "kube-api-access-fb558". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.532461 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data" (OuterVolumeSpecName: "config-data") pod "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" (UID: "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.547244 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" (UID: "b3440f4a-2f1d-4d69-aafe-ec2eb86183cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.602352 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.602640 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.602653 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:13 crc kubenswrapper[4922]: I0218 11:59:13.602665 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb558\" (UniqueName: \"kubernetes.io/projected/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc-kube-api-access-fb558\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.056289 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqtvp" event={"ID":"b3440f4a-2f1d-4d69-aafe-ec2eb86183cc","Type":"ContainerDied","Data":"55f5fe7cc5fea944498467b4885f7a182e7fdf03f39f9a7436de1aa8836d34cf"} Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.056598 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55f5fe7cc5fea944498467b4885f7a182e7fdf03f39f9a7436de1aa8836d34cf" Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.056320 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqtvp" Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.233121 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.233633 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-log" containerID="cri-o://43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422" gracePeriod=30 Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.233746 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-api" containerID="cri-o://1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94" gracePeriod=30 Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.245632 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.245891 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fe95484c-ea5d-4ea3-8915-bb6734014373" containerName="nova-scheduler-scheduler" containerID="cri-o://8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5" gracePeriod=30 Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.313109 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.313333 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" 
containerName="nova-metadata-log" containerID="cri-o://b08e25eab8acf946afba995372397a3ab97e8a1f62eebeb6161f0fd5696132b0" gracePeriod=30 Feb 18 11:59:14 crc kubenswrapper[4922]: I0218 11:59:14.313486 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" containerID="cri-o://6eb028dcb7cb22f62f9b70fd21469cacb245f308ad0c1ab142649395ab9a2db8" gracePeriod=30 Feb 18 11:59:15 crc kubenswrapper[4922]: I0218 11:59:15.066615 4922 generic.go:334] "Generic (PLEG): container finished" podID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerID="b08e25eab8acf946afba995372397a3ab97e8a1f62eebeb6161f0fd5696132b0" exitCode=143 Feb 18 11:59:15 crc kubenswrapper[4922]: I0218 11:59:15.066686 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerDied","Data":"b08e25eab8acf946afba995372397a3ab97e8a1f62eebeb6161f0fd5696132b0"} Feb 18 11:59:15 crc kubenswrapper[4922]: I0218 11:59:15.068405 4922 generic.go:334] "Generic (PLEG): container finished" podID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerID="43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422" exitCode=143 Feb 18 11:59:15 crc kubenswrapper[4922]: I0218 11:59:15.068433 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerDied","Data":"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422"} Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.017057 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.088062 4922 generic.go:334] "Generic (PLEG): container finished" podID="fe95484c-ea5d-4ea3-8915-bb6734014373" containerID="8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5" exitCode=0 Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.088111 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe95484c-ea5d-4ea3-8915-bb6734014373","Type":"ContainerDied","Data":"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5"} Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.088120 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.088142 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe95484c-ea5d-4ea3-8915-bb6734014373","Type":"ContainerDied","Data":"bde417a0228740daf9a9d49c2dd39d718faec4df17eddbb3036309edba429d67"} Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.088163 4922 scope.go:117] "RemoveContainer" containerID="8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.100261 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data\") pod \"fe95484c-ea5d-4ea3-8915-bb6734014373\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.100479 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle\") pod \"fe95484c-ea5d-4ea3-8915-bb6734014373\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.100540 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwtr\" (UniqueName: \"kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr\") pod \"fe95484c-ea5d-4ea3-8915-bb6734014373\" (UID: \"fe95484c-ea5d-4ea3-8915-bb6734014373\") " Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.105961 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr" (OuterVolumeSpecName: "kube-api-access-bcwtr") pod "fe95484c-ea5d-4ea3-8915-bb6734014373" (UID: "fe95484c-ea5d-4ea3-8915-bb6734014373"). InnerVolumeSpecName "kube-api-access-bcwtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.114662 4922 scope.go:117] "RemoveContainer" containerID="8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5" Feb 18 11:59:17 crc kubenswrapper[4922]: E0218 11:59:17.115611 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5\": container with ID starting with 8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5 not found: ID does not exist" containerID="8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.115701 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5"} err="failed to get container status \"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5\": rpc error: code = NotFound desc = could not find container \"8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5\": container with ID starting with 8d90a705ba699c8182d6d1e0ea35773ad23d4e3cf41e4c0e22f8061e6e782bf5 not found: ID does not exist" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.127579 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe95484c-ea5d-4ea3-8915-bb6734014373" (UID: "fe95484c-ea5d-4ea3-8915-bb6734014373"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.137152 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data" (OuterVolumeSpecName: "config-data") pod "fe95484c-ea5d-4ea3-8915-bb6734014373" (UID: "fe95484c-ea5d-4ea3-8915-bb6734014373"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.202318 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwtr\" (UniqueName: \"kubernetes.io/projected/fe95484c-ea5d-4ea3-8915-bb6734014373-kube-api-access-bcwtr\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.202378 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.202388 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe95484c-ea5d-4ea3-8915-bb6734014373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.436015 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.450840 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.465415 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:17 crc kubenswrapper[4922]: E0218 11:59:17.465872 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" containerName="nova-manage" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.465895 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" containerName="nova-manage" Feb 18 11:59:17 crc kubenswrapper[4922]: E0218 11:59:17.465917 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="init" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.465924 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="init" Feb 18 11:59:17 crc kubenswrapper[4922]: E0218 11:59:17.465935 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="dnsmasq-dns" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.465941 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="dnsmasq-dns" Feb 18 11:59:17 crc kubenswrapper[4922]: E0218 11:59:17.465958 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe95484c-ea5d-4ea3-8915-bb6734014373" containerName="nova-scheduler-scheduler" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.465964 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe95484c-ea5d-4ea3-8915-bb6734014373" containerName="nova-scheduler-scheduler" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.466136 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe07f11-0f10-4aa7-ab94-51d42b7a6367" containerName="dnsmasq-dns" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.466151 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" containerName="nova-manage" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.466160 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe95484c-ea5d-4ea3-8915-bb6734014373" containerName="nova-scheduler-scheduler" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 
11:59:17.466785 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.469337 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.486236 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.612098 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.612157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgb47\" (UniqueName: \"kubernetes.io/projected/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-kube-api-access-fgb47\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:17 crc kubenswrapper[4922]: I0218 11:59:17.612300 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.716385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.716458 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgb47\" (UniqueName: \"kubernetes.io/projected/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-kube-api-access-fgb47\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.716612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.732305 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-config-data\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.733440 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.739750 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fgb47\" (UniqueName: \"kubernetes.io/projected/7319f7de-4554-4a03-ba7f-c0f414ab2fe5-kube-api-access-fgb47\") pod \"nova-scheduler-0\" (UID: \"7319f7de-4554-4a03-ba7f-c0f414ab2fe5\") " pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.799530 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:17.940646 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022485 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022521 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022570 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022592 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022645 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn58b\" (UniqueName: \"kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.022829 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs\") pod \"5c0a029b-ba40-494a-b439-5ddf2073ad00\" (UID: \"5c0a029b-ba40-494a-b439-5ddf2073ad00\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.023688 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs" (OuterVolumeSpecName: "logs") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.027093 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b" (OuterVolumeSpecName: "kube-api-access-wn58b") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "kube-api-access-wn58b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.050540 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data" (OuterVolumeSpecName: "config-data") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.069452 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.077898 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.089614 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c0a029b-ba40-494a-b439-5ddf2073ad00" (UID: "5c0a029b-ba40-494a-b439-5ddf2073ad00"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.102038 4922 generic.go:334] "Generic (PLEG): container finished" podID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerID="1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94" exitCode=0 Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.102230 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerDied","Data":"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94"} Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.102269 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c0a029b-ba40-494a-b439-5ddf2073ad00","Type":"ContainerDied","Data":"1e23b924bddad76596057cd410450387011174217e8c9d8f3144d75f72013eac"} Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.102298 4922 scope.go:117] "RemoveContainer" containerID="1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.102300 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.105831 4922 generic.go:334] "Generic (PLEG): container finished" podID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerID="6eb028dcb7cb22f62f9b70fd21469cacb245f308ad0c1ab142649395ab9a2db8" exitCode=0 Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.105903 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerDied","Data":"6eb028dcb7cb22f62f9b70fd21469cacb245f308ad0c1ab142649395ab9a2db8"} Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125735 4922 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125777 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125789 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0a029b-ba40-494a-b439-5ddf2073ad00-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125804 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125814 4922 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0a029b-ba40-494a-b439-5ddf2073ad00-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.125826 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn58b\" (UniqueName: \"kubernetes.io/projected/5c0a029b-ba40-494a-b439-5ddf2073ad00-kube-api-access-wn58b\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.130672 4922 scope.go:117] "RemoveContainer" containerID="43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.142262 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.157822 4922 scope.go:117] "RemoveContainer" containerID="1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94" Feb 18 11:59:18 crc kubenswrapper[4922]: E0218 11:59:18.158211 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94\": container with ID starting with 1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94 not found: ID does not exist" containerID="1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.158249 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94"} err="failed to get container status \"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94\": rpc error: code = NotFound desc = 
could not find container \"1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94\": container with ID starting with 1ec2718d8714c27f5e2e142d0c677a6a64bedbb507cfb8ea954dbcb0e4cebc94 not found: ID does not exist" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.158276 4922 scope.go:117] "RemoveContainer" containerID="43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422" Feb 18 11:59:18 crc kubenswrapper[4922]: E0218 11:59:18.159155 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422\": container with ID starting with 43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422 not found: ID does not exist" containerID="43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.159183 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422"} err="failed to get container status \"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422\": rpc error: code = NotFound desc = could not find container \"43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422\": container with ID starting with 43904287d555f659fd122c3d980c18b3522ce068f02a16e4a164632620f31422 not found: ID does not exist" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.161959 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.197948 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:18 crc kubenswrapper[4922]: E0218 11:59:18.199004 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-api" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.199021 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-api" Feb 18 11:59:18 crc kubenswrapper[4922]: E0218 11:59:18.199046 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-log" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.199053 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-log" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.199649 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-api" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.199687 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" containerName="nova-api-log" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.202181 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.206881 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.207540 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.213096 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.231882 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.329270 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-public-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.329340 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.329409 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-config-data\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.329464 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.329733 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05385b6-6350-4ee0-b628-a1eb55dd6067-logs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.330004 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9pr\" (UniqueName: \"kubernetes.io/projected/b05385b6-6350-4ee0-b628-a1eb55dd6067-kube-api-access-ts9pr\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432114 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05385b6-6350-4ee0-b628-a1eb55dd6067-logs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432243 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9pr\" (UniqueName: \"kubernetes.io/projected/b05385b6-6350-4ee0-b628-a1eb55dd6067-kube-api-access-ts9pr\") pod \"nova-api-0\" (UID: 
\"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432278 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-public-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432318 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432414 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-config-data\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.432455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.435909 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.435984 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05385b6-6350-4ee0-b628-a1eb55dd6067-logs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.437592 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.437734 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-public-tls-certs\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.448793 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05385b6-6350-4ee0-b628-a1eb55dd6067-config-data\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.452918 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9pr\" (UniqueName: \"kubernetes.io/projected/b05385b6-6350-4ee0-b628-a1eb55dd6067-kube-api-access-ts9pr\") pod \"nova-api-0\" (UID: \"b05385b6-6350-4ee0-b628-a1eb55dd6067\") " pod="openstack/nova-api-0" Feb 
18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.522636 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.853022 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.942153 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data\") pod \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.942653 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm79t\" (UniqueName: \"kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t\") pod \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.942819 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs\") pod \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.942944 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle\") pod \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.943004 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs\") pod \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\" (UID: \"10a54dd7-a74b-49c4-a631-ad8fe2c22d58\") " Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.943448 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs" (OuterVolumeSpecName: "logs") pod "10a54dd7-a74b-49c4-a631-ad8fe2c22d58" (UID: "10a54dd7-a74b-49c4-a631-ad8fe2c22d58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.943721 4922 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-logs\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.947894 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t" (OuterVolumeSpecName: "kube-api-access-zm79t") pod "10a54dd7-a74b-49c4-a631-ad8fe2c22d58" (UID: "10a54dd7-a74b-49c4-a631-ad8fe2c22d58"). InnerVolumeSpecName "kube-api-access-zm79t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.992124 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0a029b-ba40-494a-b439-5ddf2073ad00" path="/var/lib/kubelet/pods/5c0a029b-ba40-494a-b439-5ddf2073ad00/volumes" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.993216 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe95484c-ea5d-4ea3-8915-bb6734014373" path="/var/lib/kubelet/pods/fe95484c-ea5d-4ea3-8915-bb6734014373/volumes" Feb 18 11:59:18 crc kubenswrapper[4922]: I0218 11:59:18.993289 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10a54dd7-a74b-49c4-a631-ad8fe2c22d58" (UID: "10a54dd7-a74b-49c4-a631-ad8fe2c22d58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.023402 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data" (OuterVolumeSpecName: "config-data") pod "10a54dd7-a74b-49c4-a631-ad8fe2c22d58" (UID: "10a54dd7-a74b-49c4-a631-ad8fe2c22d58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.030721 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: W0218 11:59:19.040176 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7319f7de_4554_4a03_ba7f_c0f414ab2fe5.slice/crio-4ffc2f4c77e305fea5a9a0096fb24c1314b1b4c09e6bae68b1b5d4f18815a440 WatchSource:0}: Error finding container 4ffc2f4c77e305fea5a9a0096fb24c1314b1b4c09e6bae68b1b5d4f18815a440: Status 404 returned error can't find the container with id 4ffc2f4c77e305fea5a9a0096fb24c1314b1b4c09e6bae68b1b5d4f18815a440 Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.040833 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "10a54dd7-a74b-49c4-a631-ad8fe2c22d58" (UID: "10a54dd7-a74b-49c4-a631-ad8fe2c22d58"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.046467 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.046520 4922 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.046540 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.046552 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm79t\" (UniqueName: \"kubernetes.io/projected/10a54dd7-a74b-49c4-a631-ad8fe2c22d58-kube-api-access-zm79t\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.099774 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: W0218 11:59:19.109640 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05385b6_6350_4ee0_b628_a1eb55dd6067.slice/crio-cb44d789f0d429f2f75df1183a28527e1050f66ffa221b07c70145c221061862 WatchSource:0}: Error finding container cb44d789f0d429f2f75df1183a28527e1050f66ffa221b07c70145c221061862: Status 404 returned error can't find the container with id cb44d789f0d429f2f75df1183a28527e1050f66ffa221b07c70145c221061862 Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.116931 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7319f7de-4554-4a03-ba7f-c0f414ab2fe5","Type":"ContainerStarted","Data":"4ffc2f4c77e305fea5a9a0096fb24c1314b1b4c09e6bae68b1b5d4f18815a440"} Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.121086 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10a54dd7-a74b-49c4-a631-ad8fe2c22d58","Type":"ContainerDied","Data":"0eb58513d2b717b0e4ac58c75a07be6926255b4cc98ac974a0fd2d38b35e5a01"} Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.121133 4922 scope.go:117] "RemoveContainer" containerID="6eb028dcb7cb22f62f9b70fd21469cacb245f308ad0c1ab142649395ab9a2db8" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.121254 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.158880 4922 scope.go:117] "RemoveContainer" containerID="b08e25eab8acf946afba995372397a3ab97e8a1f62eebeb6161f0fd5696132b0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.195638 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.228559 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.249630 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: E0218 11:59:19.250158 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.250170 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" Feb 18 11:59:19 crc kubenswrapper[4922]: E0218 11:59:19.250198 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-log" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.250203 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-log" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.250410 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.250435 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-log" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.252406 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.257260 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.257769 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.263479 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.354824 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.354877 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.354899 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-logs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.354930 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9h8\" (UniqueName: \"kubernetes.io/projected/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-kube-api-access-6w9h8\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.354988 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-config-data\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.456647 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.456692 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.456724 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-logs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " 
pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.456759 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9h8\" (UniqueName: \"kubernetes.io/projected/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-kube-api-access-6w9h8\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.456819 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-config-data\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.457588 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-logs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.461257 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-config-data\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.461352 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.461712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.475278 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9h8\" (UniqueName: \"kubernetes.io/projected/3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6-kube-api-access-6w9h8\") pod \"nova-metadata-0\" (UID: \"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6\") " pod="openstack/nova-metadata-0" Feb 18 11:59:19 crc kubenswrapper[4922]: I0218 11:59:19.576973 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.105051 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.145998 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05385b6-6350-4ee0-b628-a1eb55dd6067","Type":"ContainerStarted","Data":"df28e17fccddfa92c1f512f2122f395beb47b3a065c4077ad5cd820115214663"} Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.146169 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05385b6-6350-4ee0-b628-a1eb55dd6067","Type":"ContainerStarted","Data":"1beb1ab0db9b065ee6bfe84046297aa32e576317f296797a14ba2400f76e47cb"} Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.146280 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b05385b6-6350-4ee0-b628-a1eb55dd6067","Type":"ContainerStarted","Data":"cb44d789f0d429f2f75df1183a28527e1050f66ffa221b07c70145c221061862"} Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.175988 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7319f7de-4554-4a03-ba7f-c0f414ab2fe5","Type":"ContainerStarted","Data":"20cc532910fcb241e32cbde76cb1ec428f2c79cb4dc17c5a1555717b24a957b1"} Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.199725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6","Type":"ContainerStarted","Data":"ac289d5ab2fba79d8d93271e7c9b0948061fceda0f37a1c54e50187cd24f7843"} Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.221465 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.221447782 podStartE2EDuration="2.221447782s" podCreationTimestamp="2026-02-18 11:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:20.220063526 +0000 UTC m=+1361.947767606" watchObservedRunningTime="2026-02-18 11:59:20.221447782 +0000 UTC m=+1361.949151862" Feb 18 11:59:20 crc kubenswrapper[4922]: I0218 11:59:20.253976 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.253957476 podStartE2EDuration="3.253957476s" podCreationTimestamp="2026-02-18 11:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:20.246905887 +0000 UTC m=+1361.974609967" watchObservedRunningTime="2026-02-18 11:59:20.253957476 +0000 UTC m=+1361.981661556" Feb 18 11:59:21 crc kubenswrapper[4922]: I0218 11:59:21.013603 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" path="/var/lib/kubelet/pods/10a54dd7-a74b-49c4-a631-ad8fe2c22d58/volumes" Feb 18 11:59:21 crc kubenswrapper[4922]: I0218 11:59:21.222856 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6","Type":"ContainerStarted","Data":"c1ba3779a1cb1ede99b0f035434b810f749eee065544b58c9396282255ad7ea7"} Feb 18 11:59:21 crc kubenswrapper[4922]: I0218 11:59:21.222910 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6","Type":"ContainerStarted","Data":"b3ff55e981f818d10524f9732c0dcc9a693963171f691d48b912e51dffe1d8bc"} Feb 18 11:59:21 crc kubenswrapper[4922]: I0218 11:59:21.253591 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.253568714 podStartE2EDuration="2.253568714s" podCreationTimestamp="2026-02-18 11:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:21.238551783 +0000 UTC m=+1362.966255863" watchObservedRunningTime="2026-02-18 11:59:21.253568714 +0000 UTC m=+1362.981272804" Feb 18 11:59:22 crc kubenswrapper[4922]: I0218 11:59:22.799788 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 11:59:23 crc kubenswrapper[4922]: I0218 11:59:23.071410 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:23 crc kubenswrapper[4922]: I0218 11:59:23.071433 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="10a54dd7-a74b-49c4-a631-ad8fe2c22d58" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:24 crc kubenswrapper[4922]: I0218 11:59:24.577649 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 11:59:24 crc kubenswrapper[4922]: I0218 11:59:24.577971 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 11:59:27 crc kubenswrapper[4922]: I0218 11:59:27.799959 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 11:59:27 crc kubenswrapper[4922]: I0218 11:59:27.836172 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 11:59:28 crc kubenswrapper[4922]: I0218 11:59:28.325909 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 11:59:28 crc kubenswrapper[4922]: I0218 11:59:28.523074 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:59:28 crc kubenswrapper[4922]: I0218 11:59:28.523421 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 11:59:29 crc kubenswrapper[4922]: I0218 11:59:29.536536 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b05385b6-6350-4ee0-b628-a1eb55dd6067" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:29 crc kubenswrapper[4922]: I0218 11:59:29.536572 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b05385b6-6350-4ee0-b628-a1eb55dd6067" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 18 11:59:29 crc kubenswrapper[4922]: I0218 11:59:29.577477 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 11:59:29 crc kubenswrapper[4922]: I0218 11:59:29.578938 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 11:59:30 crc kubenswrapper[4922]: I0218 11:59:30.588610 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:30 crc kubenswrapper[4922]: I0218 11:59:30.588623 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 11:59:31 crc kubenswrapper[4922]: I0218 11:59:31.484278 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.079600 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.080183 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2aa305a0-c015-43c2-851c-8eff778238be" containerName="kube-state-metrics" containerID="cri-o://c04aa8b95a26312fcc998c31a3b946f55c4ecd9e671bf072302577040fa7d761" gracePeriod=30 Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.362185 4922 generic.go:334] "Generic (PLEG): container finished" podID="2aa305a0-c015-43c2-851c-8eff778238be" containerID="c04aa8b95a26312fcc998c31a3b946f55c4ecd9e671bf072302577040fa7d761" exitCode=2 Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.362299 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2aa305a0-c015-43c2-851c-8eff778238be","Type":"ContainerDied","Data":"c04aa8b95a26312fcc998c31a3b946f55c4ecd9e671bf072302577040fa7d761"} Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.602528 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.682782 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhpfc\" (UniqueName: \"kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc\") pod \"2aa305a0-c015-43c2-851c-8eff778238be\" (UID: \"2aa305a0-c015-43c2-851c-8eff778238be\") " Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.689345 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc" (OuterVolumeSpecName: "kube-api-access-hhpfc") pod "2aa305a0-c015-43c2-851c-8eff778238be" (UID: "2aa305a0-c015-43c2-851c-8eff778238be"). InnerVolumeSpecName "kube-api-access-hhpfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:35 crc kubenswrapper[4922]: I0218 11:59:35.785423 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhpfc\" (UniqueName: \"kubernetes.io/projected/2aa305a0-c015-43c2-851c-8eff778238be-kube-api-access-hhpfc\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.377100 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2aa305a0-c015-43c2-851c-8eff778238be","Type":"ContainerDied","Data":"3a3d098eed640f36965fdadc7b1dd0c83929950b22d8057eb96a4ca71c50bd14"} Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.377152 4922 scope.go:117] "RemoveContainer" containerID="c04aa8b95a26312fcc998c31a3b946f55c4ecd9e671bf072302577040fa7d761" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.378391 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.415133 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.423572 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.442493 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:36 crc kubenswrapper[4922]: E0218 11:59:36.442868 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa305a0-c015-43c2-851c-8eff778238be" containerName="kube-state-metrics" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.442884 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa305a0-c015-43c2-851c-8eff778238be" containerName="kube-state-metrics" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.443081 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa305a0-c015-43c2-851c-8eff778238be" containerName="kube-state-metrics" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.443692 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.445553 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.447702 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.476089 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.498541 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.498617 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.498671 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.498764 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8pc\" (UniqueName: \"kubernetes.io/projected/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-api-access-vn8pc\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.601409 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.601527 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.601589 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.601753 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8pc\" 
(UniqueName: \"kubernetes.io/projected/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-api-access-vn8pc\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.610266 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.618883 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.629749 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.634191 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8pc\" (UniqueName: \"kubernetes.io/projected/1b492a6f-c8fc-4a76-8645-9f94a29d5e6b-kube-api-access-vn8pc\") pod \"kube-state-metrics-0\" (UID: \"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b\") " pod="openstack/kube-state-metrics-0" Feb 18 11:59:36 crc kubenswrapper[4922]: I0218 11:59:36.780030 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.034317 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa305a0-c015-43c2-851c-8eff778238be" path="/var/lib/kubelet/pods/2aa305a0-c015-43c2-851c-8eff778238be/volumes" Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.277324 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.278019 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-central-agent" containerID="cri-o://a917c6100631e0cecbb43e73dcdd3017b9ee5fcde8f4b2eb40ffa1cf7a6ddd61" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.278038 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="proxy-httpd" containerID="cri-o://230a0bc54b8dd07f66eb86ad2215c814c3b880945116c4dab8556a97fba068aa" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.278094 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="sg-core" containerID="cri-o://4e1f36256af229f8391ab980166248080c950f7ddf5a27ccd1a51d73b649da7a" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.278180 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-notification-agent" containerID="cri-o://c0b1ec47222cf49451bd1b033a64e0ee80b9aa5aed82b2529850f17df47b4532" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.297005 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 11:59:37 crc kubenswrapper[4922]: W0218 11:59:37.299631 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b492a6f_c8fc_4a76_8645_9f94a29d5e6b.slice/crio-73bd1d1d00a22d2a791aab7c6ffd7b5d9ff1ba7174a16f36465d2da663434a86 WatchSource:0}: Error finding container 73bd1d1d00a22d2a791aab7c6ffd7b5d9ff1ba7174a16f36465d2da663434a86: Status 404 returned error can't find the container with id 73bd1d1d00a22d2a791aab7c6ffd7b5d9ff1ba7174a16f36465d2da663434a86 Feb 18 11:59:37 crc kubenswrapper[4922]: I0218 11:59:37.391329 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b","Type":"ContainerStarted","Data":"73bd1d1d00a22d2a791aab7c6ffd7b5d9ff1ba7174a16f36465d2da663434a86"} Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405149 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerID="230a0bc54b8dd07f66eb86ad2215c814c3b880945116c4dab8556a97fba068aa" exitCode=0 Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405483 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerID="4e1f36256af229f8391ab980166248080c950f7ddf5a27ccd1a51d73b649da7a" exitCode=2 Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405498 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" 
containerID="a917c6100631e0cecbb43e73dcdd3017b9ee5fcde8f4b2eb40ffa1cf7a6ddd61" exitCode=0 Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405249 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerDied","Data":"230a0bc54b8dd07f66eb86ad2215c814c3b880945116c4dab8556a97fba068aa"} Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405544 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerDied","Data":"4e1f36256af229f8391ab980166248080c950f7ddf5a27ccd1a51d73b649da7a"} Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.405570 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerDied","Data":"a917c6100631e0cecbb43e73dcdd3017b9ee5fcde8f4b2eb40ffa1cf7a6ddd61"} Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.532209 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.532643 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.542860 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 11:59:38 crc kubenswrapper[4922]: I0218 11:59:38.546604 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.418226 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1b492a6f-c8fc-4a76-8645-9f94a29d5e6b","Type":"ContainerStarted","Data":"ab9c779636aeab10119c3cbb2558f0699003e506990c48dfde5601fb4b98b651"} Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.418517 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.419220 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.438140 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.443146 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.948916229 podStartE2EDuration="3.443123388s" podCreationTimestamp="2026-02-18 11:59:36 +0000 UTC" firstStartedPulling="2026-02-18 11:59:37.301969064 +0000 UTC m=+1379.029673144" lastFinishedPulling="2026-02-18 11:59:38.796176223 +0000 UTC m=+1380.523880303" observedRunningTime="2026-02-18 11:59:39.432041847 +0000 UTC m=+1381.159745927" watchObservedRunningTime="2026-02-18 11:59:39.443123388 +0000 UTC m=+1381.170827468" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.588986 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.594799 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 11:59:39 crc kubenswrapper[4922]: I0218 11:59:39.596814 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" 
Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.432137 4922 generic.go:334] "Generic (PLEG): container finished" podID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerID="c0b1ec47222cf49451bd1b033a64e0ee80b9aa5aed82b2529850f17df47b4532" exitCode=0 Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.432189 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerDied","Data":"c0b1ec47222cf49451bd1b033a64e0ee80b9aa5aed82b2529850f17df47b4532"} Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.438689 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.799800 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.807960 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808014 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808071 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808140 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808172 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h25l\" (UniqueName: \"kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808227 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.808281 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts\") pod \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\" (UID: \"e3b8f165-b92e-47d4-ada4-5eee351d6a5a\") " Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.809531 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.809848 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.815789 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l" (OuterVolumeSpecName: "kube-api-access-8h25l") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "kube-api-access-8h25l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.826530 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts" (OuterVolumeSpecName: "scripts") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.887174 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.911630 4922 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.911667 4922 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.911681 4922 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.911695 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h25l\" (UniqueName: \"kubernetes.io/projected/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-kube-api-access-8h25l\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.911706 4922 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.934708 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:40 crc kubenswrapper[4922]: I0218 11:59:40.972683 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data" (OuterVolumeSpecName: "config-data") pod "e3b8f165-b92e-47d4-ada4-5eee351d6a5a" (UID: "e3b8f165-b92e-47d4-ada4-5eee351d6a5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.014051 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.014089 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b8f165-b92e-47d4-ada4-5eee351d6a5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.443704 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3b8f165-b92e-47d4-ada4-5eee351d6a5a","Type":"ContainerDied","Data":"f48c4f106861c2b0fbbd89f3ec527e011b85b43938cee7887c256abc61904ff7"} Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.443749 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.443767 4922 scope.go:117] "RemoveContainer" containerID="230a0bc54b8dd07f66eb86ad2215c814c3b880945116c4dab8556a97fba068aa" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.466566 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.485672 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.489518 4922 scope.go:117] "RemoveContainer" containerID="4e1f36256af229f8391ab980166248080c950f7ddf5a27ccd1a51d73b649da7a" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503154 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:41 crc kubenswrapper[4922]: E0218 11:59:41.503668 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-central-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503689 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-central-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: E0218 11:59:41.503704 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="proxy-httpd" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503710 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="proxy-httpd" Feb 18 11:59:41 crc kubenswrapper[4922]: E0218 11:59:41.503734 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="sg-core" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503742 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="sg-core" Feb 18 11:59:41 crc kubenswrapper[4922]: E0218 11:59:41.503767 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-notification-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503773 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-notification-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503972 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-notification-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.503991 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="ceilometer-central-agent" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.504000 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="sg-core" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.504017 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" containerName="proxy-httpd" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.506164 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.509655 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.509986 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.511793 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.514840 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.515034 4922 scope.go:117] "RemoveContainer" containerID="c0b1ec47222cf49451bd1b033a64e0ee80b9aa5aed82b2529850f17df47b4532" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521656 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521697 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-config-data\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521744 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521771 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw462\" (UniqueName: \"kubernetes.io/projected/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-kube-api-access-tw462\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521872 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.521932 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-scripts\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: 
I0218 11:59:41.521983 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.546557 4922 scope.go:117] "RemoveContainer" containerID="a917c6100631e0cecbb43e73dcdd3017b9ee5fcde8f4b2eb40ffa1cf7a6ddd61" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623160 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623207 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw462\" (UniqueName: \"kubernetes.io/projected/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-kube-api-access-tw462\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623282 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623312 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-scripts\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623341 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623401 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623653 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.623694 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-config-data\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.624044 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.624389 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.628893 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.630426 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.634687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-config-data\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.636634 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-scripts\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.645345 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw462\" (UniqueName: \"kubernetes.io/projected/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-kube-api-access-tw462\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.659604 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc3cdcf-4513-4e18-8d43-c435fd877ae7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc3cdcf-4513-4e18-8d43-c435fd877ae7\") " pod="openstack/ceilometer-0" Feb 18 11:59:41 crc kubenswrapper[4922]: I0218 11:59:41.820925 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 11:59:42 crc kubenswrapper[4922]: W0218 11:59:42.322022 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfc3cdcf_4513_4e18_8d43_c435fd877ae7.slice/crio-3fd39719d8855e2f91756da051f1d588a70b474048d8acaa977554ff1f3d9420 WatchSource:0}: Error finding container 3fd39719d8855e2f91756da051f1d588a70b474048d8acaa977554ff1f3d9420: Status 404 returned error can't find the container with id 3fd39719d8855e2f91756da051f1d588a70b474048d8acaa977554ff1f3d9420 Feb 18 11:59:42 crc kubenswrapper[4922]: I0218 11:59:42.322379 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 11:59:42 crc kubenswrapper[4922]: I0218 11:59:42.458426 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc3cdcf-4513-4e18-8d43-c435fd877ae7","Type":"ContainerStarted","Data":"3fd39719d8855e2f91756da051f1d588a70b474048d8acaa977554ff1f3d9420"} Feb 18 11:59:42 crc kubenswrapper[4922]: I0218 11:59:42.993464 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b8f165-b92e-47d4-ada4-5eee351d6a5a" path="/var/lib/kubelet/pods/e3b8f165-b92e-47d4-ada4-5eee351d6a5a/volumes" Feb 18 11:59:43 crc kubenswrapper[4922]: I0218 11:59:43.469991 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc3cdcf-4513-4e18-8d43-c435fd877ae7","Type":"ContainerStarted","Data":"d190cb0679446d5880e640741f917ecaff38dfe1ba9ef2c8f4d95c17496508e6"} Feb 18 11:59:45 crc kubenswrapper[4922]: I0218 11:59:45.491205 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc3cdcf-4513-4e18-8d43-c435fd877ae7","Type":"ContainerStarted","Data":"14ab6968b09de88f6b954ec95726553215462b06544acf3a2c7829a7ad204e89"} Feb 18 11:59:46 crc kubenswrapper[4922]: I0218 11:59:46.797385 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 11:59:47 crc kubenswrapper[4922]: I0218 11:59:47.513814 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc3cdcf-4513-4e18-8d43-c435fd877ae7","Type":"ContainerStarted","Data":"dda346ed1aec44db04feee05d66d67fced8a78e5b16d2764dd2e01b3cd7eab87"} Feb 18 11:59:50 crc kubenswrapper[4922]: I0218 11:59:50.553101 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc3cdcf-4513-4e18-8d43-c435fd877ae7","Type":"ContainerStarted","Data":"b3ac14f8c3f4f10ff1101938d96e0428be00ec2f23a4dc040a5a0dfd7a403a26"} Feb 18 11:59:50 crc kubenswrapper[4922]: I0218 11:59:50.553701 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 11:59:50 crc kubenswrapper[4922]: I0218 11:59:50.573122 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.265204988 podStartE2EDuration="9.573100568s" podCreationTimestamp="2026-02-18 11:59:41 +0000 UTC" firstStartedPulling="2026-02-18 11:59:42.324222986 +0000 UTC m=+1384.051927066" lastFinishedPulling="2026-02-18 11:59:49.632118566 +0000 UTC m=+1391.359822646" observedRunningTime="2026-02-18 11:59:50.572923423 +0000 UTC m=+1392.300627523" watchObservedRunningTime="2026-02-18 11:59:50.573100568 +0000 UTC m=+1392.300804648" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.154212 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97"] Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.156497 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.158255 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.166524 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.169471 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97"] Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.297690 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.297801 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.298195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgltn\" (UniqueName: \"kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.401135 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgltn\" (UniqueName: \"kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.401324 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.401355 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.402383 
4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.409696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.435569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgltn\" (UniqueName: \"kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn\") pod \"collect-profiles-29523600-9hg97\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.483157 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:00 crc kubenswrapper[4922]: I0218 12:00:00.954012 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97"] Feb 18 12:00:01 crc kubenswrapper[4922]: I0218 12:00:01.654207 4922 generic.go:334] "Generic (PLEG): container finished" podID="ee2dabc9-c094-41a8-8efd-7b113f5c634c" containerID="8bef9aa4b92aba91322be4b15768a495bfe0d2b031bccfdb47f0999ccd8a7508" exitCode=0 Feb 18 12:00:01 crc kubenswrapper[4922]: I0218 12:00:01.654254 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" event={"ID":"ee2dabc9-c094-41a8-8efd-7b113f5c634c","Type":"ContainerDied","Data":"8bef9aa4b92aba91322be4b15768a495bfe0d2b031bccfdb47f0999ccd8a7508"} Feb 18 12:00:01 crc kubenswrapper[4922]: I0218 12:00:01.655364 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" event={"ID":"ee2dabc9-c094-41a8-8efd-7b113f5c634c","Type":"ContainerStarted","Data":"79b3ffcfd2518bc6578364c9c8a0c4c14abd73b2e873e3bed532e4d07863d8c0"} Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.048912 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.156500 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume\") pod \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.156829 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgltn\" (UniqueName: \"kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn\") pod \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.157289 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume\") pod \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\" (UID: \"ee2dabc9-c094-41a8-8efd-7b113f5c634c\") " Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.157336 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee2dabc9-c094-41a8-8efd-7b113f5c634c" (UID: "ee2dabc9-c094-41a8-8efd-7b113f5c634c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.159311 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee2dabc9-c094-41a8-8efd-7b113f5c634c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.162834 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee2dabc9-c094-41a8-8efd-7b113f5c634c" (UID: "ee2dabc9-c094-41a8-8efd-7b113f5c634c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.162841 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn" (OuterVolumeSpecName: "kube-api-access-fgltn") pod "ee2dabc9-c094-41a8-8efd-7b113f5c634c" (UID: "ee2dabc9-c094-41a8-8efd-7b113f5c634c"). InnerVolumeSpecName "kube-api-access-fgltn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.260914 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgltn\" (UniqueName: \"kubernetes.io/projected/ee2dabc9-c094-41a8-8efd-7b113f5c634c-kube-api-access-fgltn\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.260965 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee2dabc9-c094-41a8-8efd-7b113f5c634c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.672116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" event={"ID":"ee2dabc9-c094-41a8-8efd-7b113f5c634c","Type":"ContainerDied","Data":"79b3ffcfd2518bc6578364c9c8a0c4c14abd73b2e873e3bed532e4d07863d8c0"} Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.672152 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b3ffcfd2518bc6578364c9c8a0c4c14abd73b2e873e3bed532e4d07863d8c0" Feb 18 12:00:03 crc kubenswrapper[4922]: I0218 12:00:03.672153 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97" Feb 18 12:00:09 crc kubenswrapper[4922]: I0218 12:00:09.345848 4922 scope.go:117] "RemoveContainer" containerID="f9617d9b57da4e95bb7eb7f0412ba485b6082bcd962a46f87e7d295e47d23bfb" Feb 18 12:00:09 crc kubenswrapper[4922]: I0218 12:00:09.811018 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:00:09 crc kubenswrapper[4922]: I0218 12:00:09.811654 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:00:11 crc kubenswrapper[4922]: I0218 12:00:11.832228 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 12:00:21 crc kubenswrapper[4922]: I0218 12:00:21.201021 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:22 crc kubenswrapper[4922]: I0218 12:00:22.116047 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:25 crc kubenswrapper[4922]: I0218 12:00:25.686021 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="rabbitmq" containerID="cri-o://745e13628b88d2d6249c23c49d872e4f0c6a229af5e85aec709bffe789a60191" gracePeriod=604796 Feb 18 12:00:26 crc kubenswrapper[4922]: I0218 12:00:26.958474 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="rabbitmq" containerID="cri-o://ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3" gracePeriod=604796 Feb 18 12:00:28 crc kubenswrapper[4922]: I0218 12:00:28.525453 
4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 18 12:00:28 crc kubenswrapper[4922]: I0218 12:00:28.871344 4922 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 18 12:00:31 crc kubenswrapper[4922]: I0218 12:00:31.974193 4922 generic.go:334] "Generic (PLEG): container finished" podID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerID="745e13628b88d2d6249c23c49d872e4f0c6a229af5e85aec709bffe789a60191" exitCode=0 Feb 18 12:00:31 crc kubenswrapper[4922]: I0218 12:00:31.974825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerDied","Data":"745e13628b88d2d6249c23c49d872e4f0c6a229af5e85aec709bffe789a60191"} Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.256653 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330041 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330464 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gvvw\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330581 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330681 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330717 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330787 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330831 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.330952 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.331000 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.331037 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info\") pod \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\" (UID: \"cef557d2-b935-4cf6-98f1-d3c2251c0e38\") " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.332749 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.335852 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.336228 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.344014 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.347561 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.349716 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info" (OuterVolumeSpecName: "pod-info") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.355865 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw" (OuterVolumeSpecName: "kube-api-access-9gvvw") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "kube-api-access-9gvvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.358425 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.392567 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data" (OuterVolumeSpecName: "config-data") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433414 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf" (OuterVolumeSpecName: "server-conf") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433529 4922 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433567 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433578 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433591 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433600 4922 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cef557d2-b935-4cf6-98f1-d3c2251c0e38-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433609 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433617 4922 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cef557d2-b935-4cf6-98f1-d3c2251c0e38-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433626 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gvvw\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-kube-api-access-9gvvw\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.433634 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.459483 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.506183 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cef557d2-b935-4cf6-98f1-d3c2251c0e38" (UID: "cef557d2-b935-4cf6-98f1-d3c2251c0e38"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.535809 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.535843 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cef557d2-b935-4cf6-98f1-d3c2251c0e38-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:32 crc kubenswrapper[4922]: I0218 12:00:32.535854 4922 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cef557d2-b935-4cf6-98f1-d3c2251c0e38-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.002990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cef557d2-b935-4cf6-98f1-d3c2251c0e38","Type":"ContainerDied","Data":"d9788f1e654fa9ba3ad3f0a6ae9798137af27ad57e7e68121b8391b0725d166c"} Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.003043 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.003051 4922 scope.go:117] "RemoveContainer" containerID="745e13628b88d2d6249c23c49d872e4f0c6a229af5e85aec709bffe789a60191" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.035701 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.040505 4922 scope.go:117] "RemoveContainer" containerID="fcebb6698c6e4874e1a9ec8fbf1e0b1f0b32b5ba69a663e4d66216e0e480bd70" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.044748 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.065968 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:33 crc kubenswrapper[4922]: E0218 12:00:33.066669 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="rabbitmq" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.066689 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="rabbitmq" Feb 18 12:00:33 crc kubenswrapper[4922]: E0218 12:00:33.066713 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="setup-container" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.066720 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="setup-container" Feb 18 12:00:33 crc kubenswrapper[4922]: E0218 12:00:33.066740 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2dabc9-c094-41a8-8efd-7b113f5c634c" containerName="collect-profiles" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.066746 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2dabc9-c094-41a8-8efd-7b113f5c634c" containerName="collect-profiles" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.066912 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" containerName="rabbitmq" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 
12:00:33.066926 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2dabc9-c094-41a8-8efd-7b113f5c634c" containerName="collect-profiles" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.068416 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.070421 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071476 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071516 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ctw5n" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071652 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071681 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071798 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.071889 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.094104 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151840 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151888 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151909 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151928 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb934d91-0203-48d1-be6a-ab13e821993d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151973 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " 
pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.151993 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.152010 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb934d91-0203-48d1-be6a-ab13e821993d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.152033 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.152064 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.152087 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb55s\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-kube-api-access-jb55s\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.152144 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254017 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254129 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254171 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254191 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254216 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb934d91-0203-48d1-be6a-ab13e821993d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254280 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254310 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254335 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb934d91-0203-48d1-be6a-ab13e821993d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254389 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254427 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.254455 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb55s\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-kube-api-access-jb55s\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.255276 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.255476 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 
12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.255504 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.255668 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.256192 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.256271 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb934d91-0203-48d1-be6a-ab13e821993d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.260378 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.263556 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.270059 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb934d91-0203-48d1-be6a-ab13e821993d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.277323 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb934d91-0203-48d1-be6a-ab13e821993d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.277537 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb55s\" (UniqueName: \"kubernetes.io/projected/bb934d91-0203-48d1-be6a-ab13e821993d-kube-api-access-jb55s\") pod \"rabbitmq-server-0\" (UID: \"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.325762 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: 
\"bb934d91-0203-48d1-be6a-ab13e821993d\") " pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.440645 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.531886 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564487 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9xcj\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564782 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564860 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564898 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564926 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.564963 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.565000 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.565016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.565061 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf\") pod 
\"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.565102 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.565188 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"12b84523-522e-4e8c-b78e-0094262fb1f8\" (UID: \"12b84523-522e-4e8c-b78e-0094262fb1f8\") " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.567025 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.573087 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.574511 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.577037 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj" (OuterVolumeSpecName: "kube-api-access-d9xcj") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "kube-api-access-d9xcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.578630 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info" (OuterVolumeSpecName: "pod-info") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.579100 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.579892 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.580744 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.628708 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data" (OuterVolumeSpecName: "config-data") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721311 4922 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12b84523-522e-4e8c-b78e-0094262fb1f8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721351 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721403 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721438 4922 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12b84523-522e-4e8c-b78e-0094262fb1f8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721447 4922 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721456 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721467 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.721496 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 
12:00:33.721505 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9xcj\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-kube-api-access-d9xcj\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.753444 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf" (OuterVolumeSpecName: "server-conf") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.759380 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.788094 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "12b84523-522e-4e8c-b78e-0094262fb1f8" (UID: "12b84523-522e-4e8c-b78e-0094262fb1f8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.823655 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.823687 4922 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12b84523-522e-4e8c-b78e-0094262fb1f8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.823698 4922 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12b84523-522e-4e8c-b78e-0094262fb1f8-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:33 crc kubenswrapper[4922]: I0218 12:00:33.971934 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.018008 4922 generic.go:334] "Generic (PLEG): container finished" podID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerID="ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3" exitCode=0 Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.018122 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.018116 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerDied","Data":"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3"} Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.019477 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"12b84523-522e-4e8c-b78e-0094262fb1f8","Type":"ContainerDied","Data":"aab8e1d8bb4c1667bc6b73808bdb819ba395465155e9f67195316f9044955cf6"} Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.019533 4922 scope.go:117] "RemoveContainer" containerID="ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.027560 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb934d91-0203-48d1-be6a-ab13e821993d","Type":"ContainerStarted","Data":"feb4e60334b86b208e674d9d4c6478a43e10f01e123da2c32ceacc5e936672dc"} Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.106319 4922 scope.go:117] "RemoveContainer" containerID="e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.153453 4922 scope.go:117] "RemoveContainer" containerID="ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3" Feb 18 12:00:34 crc kubenswrapper[4922]: E0218 12:00:34.154328 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3\": container with ID starting with ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3 not found: ID does not exist" containerID="ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.154387 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3"} err="failed to get container status \"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3\": rpc error: code = NotFound desc = could not find container \"ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3\": container with ID starting with ebcec927014698c8dd3e3cce0832985c8aae1c112a06d773a543bb68530768f3 not found: ID does not exist" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.154417 4922 scope.go:117] "RemoveContainer" containerID="e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a" Feb 18 12:00:34 crc kubenswrapper[4922]: E0218 12:00:34.154810 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a\": container with ID starting with e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a not found: ID does not exist" containerID="e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.154829 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a"} err="failed to get container status 
\"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a\": rpc error: code = NotFound desc = could not find container \"e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a\": container with ID starting with e2215857f67c60d9f5c91d2142e0be2f147282bd8c00e873605b8bd17b7df49a not found: ID does not exist" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.160447 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.169760 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.195498 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:34 crc kubenswrapper[4922]: E0218 12:00:34.195904 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="rabbitmq" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.195920 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="rabbitmq" Feb 18 12:00:34 crc kubenswrapper[4922]: E0218 12:00:34.195954 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="setup-container" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.195960 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="setup-container" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.196149 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" containerName="rabbitmq" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.197162 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.198606 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.198669 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.199012 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.199144 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.199289 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.200074 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8fwmc" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.200582 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.205509 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.302203 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.304495 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.322185 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.354825 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.354883 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355041 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eb7dcb0-20c5-414c-bc86-58461654bcb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355157 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355219 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355463 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eb7dcb0-20c5-414c-bc86-58461654bcb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355536 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355565 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szj9q\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-kube-api-access-szj9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355604 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355791 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.355895 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457393 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9eb7dcb0-20c5-414c-bc86-58461654bcb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457471 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457514 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szj9q\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-kube-api-access-szj9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457546 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457575 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457616 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457673 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457734 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8p6v\" (UniqueName: \"kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457765 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457797 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457821 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc 
kubenswrapper[4922]: I0218 12:00:34.457874 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eb7dcb0-20c5-414c-bc86-58461654bcb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457911 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.457943 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.458353 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.458714 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.458850 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.459007 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.459040 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.459438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9eb7dcb0-20c5-414c-bc86-58461654bcb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.464004 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9eb7dcb0-20c5-414c-bc86-58461654bcb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.464994 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.466256 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.468775 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9eb7dcb0-20c5-414c-bc86-58461654bcb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.479495 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szj9q\" (UniqueName: \"kubernetes.io/projected/9eb7dcb0-20c5-414c-bc86-58461654bcb5-kube-api-access-szj9q\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.495698 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9eb7dcb0-20c5-414c-bc86-58461654bcb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.559305 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.560117 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8p6v\" (UniqueName: \"kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.560015 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.559873 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.560338 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.560747 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.594873 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8p6v\" (UniqueName: \"kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v\") pod \"redhat-operators-kjhlh\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.619797 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.985286 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b84523-522e-4e8c-b78e-0094262fb1f8" path="/var/lib/kubelet/pods/12b84523-522e-4e8c-b78e-0094262fb1f8/volumes" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.986250 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef557d2-b935-4cf6-98f1-d3c2251c0e38" path="/var/lib/kubelet/pods/cef557d2-b935-4cf6-98f1-d3c2251c0e38/volumes" Feb 18 12:00:34 crc kubenswrapper[4922]: I0218 12:00:34.991658 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:35 crc kubenswrapper[4922]: I0218 12:00:35.038154 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:00:35 crc kubenswrapper[4922]: I0218 12:00:35.934522 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:35 crc kubenswrapper[4922]: I0218 12:00:35.937503 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:35 crc kubenswrapper[4922]: I0218 12:00:35.943112 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 12:00:35 crc kubenswrapper[4922]: I0218 12:00:35.970452 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019226 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019275 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99xf\" (UniqueName: \"kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019344 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019596 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019672 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019807 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.019923 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.065573 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eb7dcb0-20c5-414c-bc86-58461654bcb5","Type":"ContainerStarted","Data":"d4ee5258db15b40985f75491134792db54b8f88a99d9dc67132d9916cec20645"} 
Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.067113 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb934d91-0203-48d1-be6a-ab13e821993d","Type":"ContainerStarted","Data":"601c44d2e7a1e66d83dce04779c8353d850c14d7d1ba8a2cf3bd8ac47fff773a"} Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.069881 4922 generic.go:334] "Generic (PLEG): container finished" podID="6666e009-8c33-402c-865e-03e35b98ad97" containerID="a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4" exitCode=0 Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.069932 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerDied","Data":"a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4"} Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.069971 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerStarted","Data":"16b2380d157280b8df652af0256b934f5510d0494aaa70da277ce7c3af2d5728"} Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.124676 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.124809 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.124845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.124918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.124970 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.125030 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: 
I0218 12:00:36.125055 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z99xf\" (UniqueName: \"kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.126273 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.126808 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.127332 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.127931 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.128566 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.128696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.151061 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z99xf\" (UniqueName: \"kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf\") pod \"dnsmasq-dns-d558885bc-pg7nw\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.274887 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:36 crc kubenswrapper[4922]: I0218 12:00:36.850164 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:37 crc kubenswrapper[4922]: I0218 12:00:37.081466 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" event={"ID":"ec061216-02ec-4395-a5a8-baa7004bf191","Type":"ContainerStarted","Data":"ce44beb1a2faa1a1bce3fbd62fd03f10442fe344f181a8b9740c07dc8a5954e6"} Feb 18 12:00:37 crc kubenswrapper[4922]: I0218 12:00:37.087275 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eb7dcb0-20c5-414c-bc86-58461654bcb5","Type":"ContainerStarted","Data":"0ea30ab9744b418aaa71c6de8970bfdb30e18f4cbcf5605e9ca3cf28ff78e461"} Feb 18 12:00:38 crc kubenswrapper[4922]: I0218 12:00:38.097919 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec061216-02ec-4395-a5a8-baa7004bf191" containerID="a65dc7db9a28e801a77617198d8984945af873b42a9d92e64f4d248230c46bbb" exitCode=0 Feb 18 12:00:38 crc kubenswrapper[4922]: I0218 12:00:38.097998 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" event={"ID":"ec061216-02ec-4395-a5a8-baa7004bf191","Type":"ContainerDied","Data":"a65dc7db9a28e801a77617198d8984945af873b42a9d92e64f4d248230c46bbb"} Feb 18 12:00:38 crc kubenswrapper[4922]: I0218 12:00:38.102060 4922 generic.go:334] "Generic (PLEG): container finished" podID="6666e009-8c33-402c-865e-03e35b98ad97" containerID="95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d" exitCode=0 Feb 18 12:00:38 crc kubenswrapper[4922]: I0218 12:00:38.103117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerDied","Data":"95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d"} Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.119037 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerStarted","Data":"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a"} Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.124704 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" event={"ID":"ec061216-02ec-4395-a5a8-baa7004bf191","Type":"ContainerStarted","Data":"d2822d61699885da5388dcd0043c11b3bef2124daed27853cb52ed512f1cbcf2"} Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.124887 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.145604 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjhlh" podStartSLOduration=2.736054368 podStartE2EDuration="5.145586487s" podCreationTimestamp="2026-02-18 12:00:34 +0000 UTC" firstStartedPulling="2026-02-18 12:00:36.071922317 +0000 UTC m=+1437.799626387" lastFinishedPulling="2026-02-18 12:00:38.481454426 +0000 UTC m=+1440.209158506" observedRunningTime="2026-02-18 12:00:39.13623631 +0000 UTC m=+1440.863940420" watchObservedRunningTime="2026-02-18 12:00:39.145586487 +0000 UTC m=+1440.873290567" Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.162068 4922 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" podStartSLOduration=4.162048305 podStartE2EDuration="4.162048305s" podCreationTimestamp="2026-02-18 12:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:00:39.16029629 +0000 UTC m=+1440.888000390" watchObservedRunningTime="2026-02-18 12:00:39.162048305 +0000 UTC m=+1440.889752385" Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.807116 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:00:39 crc kubenswrapper[4922]: I0218 12:00:39.807180 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:00:44 crc kubenswrapper[4922]: I0218 12:00:44.620426 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:44 crc kubenswrapper[4922]: I0218 12:00:44.620994 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:44 crc kubenswrapper[4922]: I0218 12:00:44.666882 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:45 crc kubenswrapper[4922]: I0218 12:00:45.230456 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:45 crc kubenswrapper[4922]: I0218 12:00:45.277402 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.276507 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.368991 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.369658 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="dnsmasq-dns" containerID="cri-o://e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4" gracePeriod=10 Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.458959 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-zdlvc"] Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.460563 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.482660 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-zdlvc"] Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.543774 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.543828 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.543878 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmvm\" (UniqueName: \"kubernetes.io/projected/d7048bd5-50d1-472a-a898-6cf57cf126d8-kube-api-access-kgmvm\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.543944 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-config\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.544046 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.544181 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.544243 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648058 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648128 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648319 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648379 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648410 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmvm\" (UniqueName: \"kubernetes.io/projected/d7048bd5-50d1-472a-a898-6cf57cf126d8-kube-api-access-kgmvm\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.648957 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-config\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.649138 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.650668 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.650701 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.651232 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.651489 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.651851 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-config\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.652660 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7048bd5-50d1-472a-a898-6cf57cf126d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.681767 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmvm\" (UniqueName: \"kubernetes.io/projected/d7048bd5-50d1-472a-a898-6cf57cf126d8-kube-api-access-kgmvm\") pod \"dnsmasq-dns-6b6dc74c5-zdlvc\" (UID: \"d7048bd5-50d1-472a-a898-6cf57cf126d8\") " pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.833247 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:46 crc kubenswrapper[4922]: I0218 12:00:46.960494 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.055732 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.056094 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.056528 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.056575 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.056642 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.056713 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chbg5\" (UniqueName: \"kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5\") pod \"7ec5b650-c58d-4b8b-a903-7b95c211139c\" (UID: \"7ec5b650-c58d-4b8b-a903-7b95c211139c\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.074039 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5" (OuterVolumeSpecName: "kube-api-access-chbg5") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). InnerVolumeSpecName "kube-api-access-chbg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.119903 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.133532 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.140003 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.161980 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.162226 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.162297 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.162414 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chbg5\" (UniqueName: \"kubernetes.io/projected/7ec5b650-c58d-4b8b-a903-7b95c211139c-kube-api-access-chbg5\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.162906 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config" (OuterVolumeSpecName: "config") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.194829 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ec5b650-c58d-4b8b-a903-7b95c211139c" (UID: "7ec5b650-c58d-4b8b-a903-7b95c211139c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.205436 4922 generic.go:334] "Generic (PLEG): container finished" podID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerID="e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4" exitCode=0 Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.205656 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" event={"ID":"7ec5b650-c58d-4b8b-a903-7b95c211139c","Type":"ContainerDied","Data":"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4"} Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.205725 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" event={"ID":"7ec5b650-c58d-4b8b-a903-7b95c211139c","Type":"ContainerDied","Data":"ec7c038a1e2201c0d4227eabf40ab3692ebca8c65c8f35a81b082989a7f57de6"} Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.205751 4922 scope.go:117] "RemoveContainer" containerID="e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.205986 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-c2lrw" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.206227 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjhlh" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="registry-server" containerID="cri-o://d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a" gracePeriod=2 Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.251975 4922 scope.go:117] "RemoveContainer" containerID="002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.268617 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.268662 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ec5b650-c58d-4b8b-a903-7b95c211139c-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.313421 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.321574 4922 scope.go:117] "RemoveContainer" containerID="e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4" Feb 18 12:00:47 crc kubenswrapper[4922]: E0218 12:00:47.327554 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4\": container with ID starting with e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4 not found: ID does not exist" 
containerID="e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.327621 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4"} err="failed to get container status \"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4\": rpc error: code = NotFound desc = could not find container \"e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4\": container with ID starting with e9ab7fb9a54d8f679a096c44f20292d1e9d8bf666191a5f0522387545b3f92c4 not found: ID does not exist" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.327657 4922 scope.go:117] "RemoveContainer" containerID="002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a" Feb 18 12:00:47 crc kubenswrapper[4922]: E0218 12:00:47.332285 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a\": container with ID starting with 002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a not found: ID does not exist" containerID="002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.332332 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a"} err="failed to get container status \"002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a\": rpc error: code = NotFound desc = could not find container \"002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a\": container with ID starting with 002e5f15ac919f451d0a14749c9df4d942fa8a8527a3d44eb7afde0e3a5e390a not found: ID does not exist" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.341554 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-c2lrw"] Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.369136 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-zdlvc"] Feb 18 12:00:47 crc kubenswrapper[4922]: W0218 12:00:47.424696 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7048bd5_50d1_472a_a898_6cf57cf126d8.slice/crio-e533c7e82990e2ba6c93fbb180353bbfe4e66b977189b12e497649fc184f5b6a WatchSource:0}: Error finding container e533c7e82990e2ba6c93fbb180353bbfe4e66b977189b12e497649fc184f5b6a: Status 404 returned error can't find the container with id e533c7e82990e2ba6c93fbb180353bbfe4e66b977189b12e497649fc184f5b6a Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.859652 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.984049 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities\") pod \"6666e009-8c33-402c-865e-03e35b98ad97\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.984392 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content\") pod \"6666e009-8c33-402c-865e-03e35b98ad97\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.984464 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8p6v\" (UniqueName: \"kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v\") pod \"6666e009-8c33-402c-865e-03e35b98ad97\" (UID: \"6666e009-8c33-402c-865e-03e35b98ad97\") " Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.985071 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities" (OuterVolumeSpecName: "utilities") pod "6666e009-8c33-402c-865e-03e35b98ad97" (UID: "6666e009-8c33-402c-865e-03e35b98ad97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:47 crc kubenswrapper[4922]: I0218 12:00:47.989324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v" (OuterVolumeSpecName: "kube-api-access-s8p6v") pod "6666e009-8c33-402c-865e-03e35b98ad97" (UID: "6666e009-8c33-402c-865e-03e35b98ad97"). InnerVolumeSpecName "kube-api-access-s8p6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.086435 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.086471 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8p6v\" (UniqueName: \"kubernetes.io/projected/6666e009-8c33-402c-865e-03e35b98ad97-kube-api-access-s8p6v\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.096259 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6666e009-8c33-402c-865e-03e35b98ad97" (UID: "6666e009-8c33-402c-865e-03e35b98ad97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.188616 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6666e009-8c33-402c-865e-03e35b98ad97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.217391 4922 generic.go:334] "Generic (PLEG): container finished" podID="6666e009-8c33-402c-865e-03e35b98ad97" containerID="d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a" exitCode=0 Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.217438 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhlh" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.217455 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerDied","Data":"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a"} Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.217479 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhlh" event={"ID":"6666e009-8c33-402c-865e-03e35b98ad97","Type":"ContainerDied","Data":"16b2380d157280b8df652af0256b934f5510d0494aaa70da277ce7c3af2d5728"} Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.217508 4922 scope.go:117] "RemoveContainer" containerID="d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.219879 4922 generic.go:334] "Generic (PLEG): container finished" podID="d7048bd5-50d1-472a-a898-6cf57cf126d8" containerID="f760aef3dd84c8a9b4a1d6590daf508f28b329c6e8c215c202f916bdcf8ff7f3" exitCode=0 Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.219946 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" event={"ID":"d7048bd5-50d1-472a-a898-6cf57cf126d8","Type":"ContainerDied","Data":"f760aef3dd84c8a9b4a1d6590daf508f28b329c6e8c215c202f916bdcf8ff7f3"} Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.219975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" event={"ID":"d7048bd5-50d1-472a-a898-6cf57cf126d8","Type":"ContainerStarted","Data":"e533c7e82990e2ba6c93fbb180353bbfe4e66b977189b12e497649fc184f5b6a"} Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.269270 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.269953 4922 scope.go:117] "RemoveContainer" containerID="95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.279577 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjhlh"] Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.293142 4922 scope.go:117] "RemoveContainer" containerID="a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.319336 4922 scope.go:117] "RemoveContainer" containerID="d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a" Feb 18 12:00:48 crc kubenswrapper[4922]: E0218 12:00:48.319721 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a\": container with ID starting with d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a not found: ID does not exist" containerID="d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.319746 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a"} err="failed to get container status \"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a\": rpc error: code = NotFound desc = could not find container \"d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a\": container with ID starting with d89593d4fb50825f53b58c9f55d45d23ff0b9a1968fcffa7473579b979ff011a not found: ID does not exist" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.319767 4922 scope.go:117] "RemoveContainer" containerID="95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d" Feb 18 12:00:48 crc kubenswrapper[4922]: E0218 12:00:48.320005 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d\": container with ID starting with 95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d not found: ID does not exist" containerID="95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.320023 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d"} err="failed to get container status \"95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d\": rpc error: code = NotFound desc = could not find container \"95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d\": container with ID starting with 95962c83417849aecd0d2c14af67a541fb2f9eb477c7d2dac43a445ab6b17a5d not found: ID does not exist" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.320035 4922 scope.go:117] "RemoveContainer" containerID="a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4" Feb 18 12:00:48 crc kubenswrapper[4922]: E0218 12:00:48.320254 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4\": container with ID starting with a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4 not found: ID does not exist" containerID="a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.320271 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4"} err="failed to get container status \"a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4\": rpc error: code = NotFound desc = could not find container \"a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4\": container with ID starting with a7ee766db58b3d5721673fe438cd3aa1ba8418159b881cea3cf559cebc804ed4 not found: ID does not exist" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.998116 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6666e009-8c33-402c-865e-03e35b98ad97" 
path="/var/lib/kubelet/pods/6666e009-8c33-402c-865e-03e35b98ad97/volumes" Feb 18 12:00:48 crc kubenswrapper[4922]: I0218 12:00:48.999335 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" path="/var/lib/kubelet/pods/7ec5b650-c58d-4b8b-a903-7b95c211139c/volumes" Feb 18 12:00:49 crc kubenswrapper[4922]: I0218 12:00:49.237716 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" event={"ID":"d7048bd5-50d1-472a-a898-6cf57cf126d8","Type":"ContainerStarted","Data":"313947e563aa52919e647893fe1e854174e8a8d9215bc035726940710068797a"} Feb 18 12:00:49 crc kubenswrapper[4922]: I0218 12:00:49.238002 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:49 crc kubenswrapper[4922]: I0218 12:00:49.257595 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" podStartSLOduration=3.257576363 podStartE2EDuration="3.257576363s" podCreationTimestamp="2026-02-18 12:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:00:49.254268489 +0000 UTC m=+1450.981972569" watchObservedRunningTime="2026-02-18 12:00:49.257576363 +0000 UTC m=+1450.985280443" Feb 18 12:00:56 crc kubenswrapper[4922]: I0218 12:00:56.835483 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-zdlvc" Feb 18 12:00:56 crc kubenswrapper[4922]: I0218 12:00:56.941160 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:56 crc kubenswrapper[4922]: I0218 12:00:56.948512 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="dnsmasq-dns" containerID="cri-o://d2822d61699885da5388dcd0043c11b3bef2124daed27853cb52ed512f1cbcf2" gracePeriod=10 Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.340973 4922 generic.go:334] "Generic (PLEG): container finished" podID="ec061216-02ec-4395-a5a8-baa7004bf191" containerID="d2822d61699885da5388dcd0043c11b3bef2124daed27853cb52ed512f1cbcf2" exitCode=0 Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.341044 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" event={"ID":"ec061216-02ec-4395-a5a8-baa7004bf191","Type":"ContainerDied","Data":"d2822d61699885da5388dcd0043c11b3bef2124daed27853cb52ed512f1cbcf2"} Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.450634 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.636028 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.636107 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.636164 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z99xf\" (UniqueName: \"kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.636203 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.636230 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.637810 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.638137 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb\") pod \"ec061216-02ec-4395-a5a8-baa7004bf191\" (UID: \"ec061216-02ec-4395-a5a8-baa7004bf191\") " Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.642967 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf" (OuterVolumeSpecName: "kube-api-access-z99xf") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "kube-api-access-z99xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.699884 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.699892 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.710138 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.713354 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.716663 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.722001 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config" (OuterVolumeSpecName: "config") pod "ec061216-02ec-4395-a5a8-baa7004bf191" (UID: "ec061216-02ec-4395-a5a8-baa7004bf191"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.740981 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741015 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741027 4922 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741037 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z99xf\" (UniqueName: \"kubernetes.io/projected/ec061216-02ec-4395-a5a8-baa7004bf191-kube-api-access-z99xf\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741063 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741074 4922 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:57 crc kubenswrapper[4922]: I0218 12:00:57.741083 4922 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec061216-02ec-4395-a5a8-baa7004bf191-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.352541 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" event={"ID":"ec061216-02ec-4395-a5a8-baa7004bf191","Type":"ContainerDied","Data":"ce44beb1a2faa1a1bce3fbd62fd03f10442fe344f181a8b9740c07dc8a5954e6"} Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.352605 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-pg7nw" Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.352612 4922 scope.go:117] "RemoveContainer" containerID="d2822d61699885da5388dcd0043c11b3bef2124daed27853cb52ed512f1cbcf2" Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.382765 4922 scope.go:117] "RemoveContainer" containerID="a65dc7db9a28e801a77617198d8984945af873b42a9d92e64f4d248230c46bbb" Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.399122 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.407592 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-pg7nw"] Feb 18 12:00:58 crc kubenswrapper[4922]: I0218 12:00:58.992471 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" path="/var/lib/kubelet/pods/ec061216-02ec-4395-a5a8-baa7004bf191/volumes" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.150987 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523601-t5w2s"] Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151706 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151724 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151737 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="extract-content" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151744 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="extract-content" Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151771 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="registry-server" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151778 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="registry-server" Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151793 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="init" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151800 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="init" Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151815 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="extract-utilities" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151822 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="extract-utilities" Feb 18 12:01:00 crc kubenswrapper[4922]: E0218 12:01:00.151835 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151842 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: 
E0218 12:01:00.151857 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="init" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.151868 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="init" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.152078 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec061216-02ec-4395-a5a8-baa7004bf191" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.152101 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec5b650-c58d-4b8b-a903-7b95c211139c" containerName="dnsmasq-dns" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.152116 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6666e009-8c33-402c-865e-03e35b98ad97" containerName="registry-server" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.152941 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.164259 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523601-t5w2s"] Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.188003 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.188047 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glwk2\" (UniqueName: \"kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.188126 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.188312 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.289642 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.289866 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.289993 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.290081 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glwk2\" (UniqueName: \"kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.299194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.299446 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.305185 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.318422 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glwk2\" (UniqueName: \"kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2\") pod \"keystone-cron-29523601-t5w2s\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.469027 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:00 crc kubenswrapper[4922]: I0218 12:01:00.994577 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523601-t5w2s"] Feb 18 12:01:01 crc kubenswrapper[4922]: I0218 12:01:01.386415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523601-t5w2s" event={"ID":"07d51aec-efff-44ea-b9c5-c5335f63e0f2","Type":"ContainerStarted","Data":"d727a737a65ba6105cfd3ad5cbfbf25f5c64e5110bfdff32ea4e8f4470ef37ca"} Feb 18 12:01:01 crc kubenswrapper[4922]: I0218 12:01:01.386496 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523601-t5w2s" event={"ID":"07d51aec-efff-44ea-b9c5-c5335f63e0f2","Type":"ContainerStarted","Data":"52162534e4622938a1f196f2b5aacb6d07a616c817761fe3e3576c455d12e223"} Feb 18 12:01:01 crc kubenswrapper[4922]: I0218 12:01:01.410295 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523601-t5w2s" podStartSLOduration=1.410273576 podStartE2EDuration="1.410273576s" podCreationTimestamp="2026-02-18 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:01:01.405534375 +0000 UTC m=+1463.133238455" watchObservedRunningTime="2026-02-18 12:01:01.410273576 +0000 UTC m=+1463.137977656" Feb 18 12:01:03 crc kubenswrapper[4922]: I0218 12:01:03.405518 4922 generic.go:334] "Generic (PLEG): container finished" podID="07d51aec-efff-44ea-b9c5-c5335f63e0f2" containerID="d727a737a65ba6105cfd3ad5cbfbf25f5c64e5110bfdff32ea4e8f4470ef37ca" exitCode=0 Feb 18 12:01:03 crc kubenswrapper[4922]: I0218 12:01:03.405552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523601-t5w2s" event={"ID":"07d51aec-efff-44ea-b9c5-c5335f63e0f2","Type":"ContainerDied","Data":"d727a737a65ba6105cfd3ad5cbfbf25f5c64e5110bfdff32ea4e8f4470ef37ca"} Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.837793 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.879208 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data\") pod \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.879316 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glwk2\" (UniqueName: \"kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2\") pod \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.879344 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle\") pod \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.879378 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys\") pod \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\" (UID: \"07d51aec-efff-44ea-b9c5-c5335f63e0f2\") " Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.887225 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "07d51aec-efff-44ea-b9c5-c5335f63e0f2" (UID: "07d51aec-efff-44ea-b9c5-c5335f63e0f2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.910950 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2" (OuterVolumeSpecName: "kube-api-access-glwk2") pod "07d51aec-efff-44ea-b9c5-c5335f63e0f2" (UID: "07d51aec-efff-44ea-b9c5-c5335f63e0f2"). InnerVolumeSpecName "kube-api-access-glwk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.918370 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07d51aec-efff-44ea-b9c5-c5335f63e0f2" (UID: "07d51aec-efff-44ea-b9c5-c5335f63e0f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:04 crc kubenswrapper[4922]: I0218 12:01:04.942689 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data" (OuterVolumeSpecName: "config-data") pod "07d51aec-efff-44ea-b9c5-c5335f63e0f2" (UID: "07d51aec-efff-44ea-b9c5-c5335f63e0f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.011734 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.012072 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glwk2\" (UniqueName: \"kubernetes.io/projected/07d51aec-efff-44ea-b9c5-c5335f63e0f2-kube-api-access-glwk2\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.012100 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.012146 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07d51aec-efff-44ea-b9c5-c5335f63e0f2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.428213 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523601-t5w2s" event={"ID":"07d51aec-efff-44ea-b9c5-c5335f63e0f2","Type":"ContainerDied","Data":"52162534e4622938a1f196f2b5aacb6d07a616c817761fe3e3576c455d12e223"} Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.428295 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52162534e4622938a1f196f2b5aacb6d07a616c817761fe3e3576c455d12e223" Feb 18 12:01:05 crc kubenswrapper[4922]: I0218 12:01:05.428665 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523601-t5w2s" Feb 18 12:01:08 crc kubenswrapper[4922]: I0218 12:01:08.462504 4922 generic.go:334] "Generic (PLEG): container finished" podID="bb934d91-0203-48d1-be6a-ab13e821993d" containerID="601c44d2e7a1e66d83dce04779c8353d850c14d7d1ba8a2cf3bd8ac47fff773a" exitCode=0 Feb 18 12:01:08 crc kubenswrapper[4922]: I0218 12:01:08.462594 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb934d91-0203-48d1-be6a-ab13e821993d","Type":"ContainerDied","Data":"601c44d2e7a1e66d83dce04779c8353d850c14d7d1ba8a2cf3bd8ac47fff773a"} Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.281115 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx"] Feb 18 12:01:09 crc kubenswrapper[4922]: E0218 12:01:09.282992 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07d51aec-efff-44ea-b9c5-c5335f63e0f2" containerName="keystone-cron" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.283021 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="07d51aec-efff-44ea-b9c5-c5335f63e0f2" containerName="keystone-cron" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.283250 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="07d51aec-efff-44ea-b9c5-c5335f63e0f2" containerName="keystone-cron" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.284162 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.286471 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.286683 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.286600 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.286990 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.296672 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx"] Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.399146 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.399448 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.399577 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zc6j\" (UniqueName: \"kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.399693 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.473213 4922 generic.go:334] "Generic (PLEG): container finished" podID="9eb7dcb0-20c5-414c-bc86-58461654bcb5" containerID="0ea30ab9744b418aaa71c6de8970bfdb30e18f4cbcf5605e9ca3cf28ff78e461" exitCode=0 Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.473274 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eb7dcb0-20c5-414c-bc86-58461654bcb5","Type":"ContainerDied","Data":"0ea30ab9744b418aaa71c6de8970bfdb30e18f4cbcf5605e9ca3cf28ff78e461"} Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.477901 4922 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb934d91-0203-48d1-be6a-ab13e821993d","Type":"ContainerStarted","Data":"9f81234cb9ec53a5cd936915d2bb1d6d2143b00ffa178762474b013411e95937"} Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.478822 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.493578 4922 scope.go:117] "RemoveContainer" containerID="5176cb9980de6bcd0a67b80f4ff01a72286ab10295b4c3d177fe01a39914f0b0" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.501466 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.501590 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.501652 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zc6j\" (UniqueName: \"kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.501690 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.505805 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.509744 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.511438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: 
\"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.526079 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zc6j\" (UniqueName: \"kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.552657 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.552634236 podStartE2EDuration="36.552634236s" podCreationTimestamp="2026-02-18 12:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:01:09.537496713 +0000 UTC m=+1471.265200783" watchObservedRunningTime="2026-02-18 12:01:09.552634236 +0000 UTC m=+1471.280338316" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.601124 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.744694 4922 scope.go:117] "RemoveContainer" containerID="50e02399793b7c21fb8b885dfb76c0b8a822098669151d568f198c715c6c35d1" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.807074 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.807132 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.807181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.808009 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:01:09 crc kubenswrapper[4922]: I0218 12:01:09.808087 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c" gracePeriod=600 Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.267724 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx"] Feb 18 12:01:10 crc kubenswrapper[4922]: W0218 12:01:10.269891 4922 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30aa9b56_28ab_4d32_beb5_965876a6e243.slice/crio-a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7 WatchSource:0}: Error finding container a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7: Status 404 returned error can't find the container with id a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7 Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.506138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" event={"ID":"30aa9b56-28ab-4d32-beb5-965876a6e243","Type":"ContainerStarted","Data":"a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7"} Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.510258 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c" exitCode=0 Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.510443 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c"} Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.510728 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df"} Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.510803 4922 scope.go:117] "RemoveContainer" containerID="6ab0358a6b4b84604aa8265da97113127295fc06806ed22c39a69885110c93fc" Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.517879 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9eb7dcb0-20c5-414c-bc86-58461654bcb5","Type":"ContainerStarted","Data":"9d05dc04734487fb2ac1de6c79ab5bd89a5821495af0603999521dbbacdc9b6a"} Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.518642 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:01:10 crc kubenswrapper[4922]: I0218 12:01:10.568379 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.568339523 podStartE2EDuration="36.568339523s" podCreationTimestamp="2026-02-18 12:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:01:10.56269873 +0000 UTC m=+1472.290402840" watchObservedRunningTime="2026-02-18 12:01:10.568339523 +0000 UTC m=+1472.296043613" Feb 18 12:01:21 crc kubenswrapper[4922]: I0218 12:01:21.655234 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" event={"ID":"30aa9b56-28ab-4d32-beb5-965876a6e243","Type":"ContainerStarted","Data":"e9389767f4da8a0c337e9f914b300c43c380f37d7c3b13c7680f21f3537ce2e0"} Feb 18 12:01:21 crc kubenswrapper[4922]: I0218 12:01:21.677663 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" podStartSLOduration=1.8241466480000001 
podStartE2EDuration="12.677645987s" podCreationTimestamp="2026-02-18 12:01:09 +0000 UTC" firstStartedPulling="2026-02-18 12:01:10.272234554 +0000 UTC m=+1471.999938634" lastFinishedPulling="2026-02-18 12:01:21.125733893 +0000 UTC m=+1482.853437973" observedRunningTime="2026-02-18 12:01:21.671764998 +0000 UTC m=+1483.399469088" watchObservedRunningTime="2026-02-18 12:01:21.677645987 +0000 UTC m=+1483.405350067" Feb 18 12:01:23 crc kubenswrapper[4922]: I0218 12:01:23.536715 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 12:01:24 crc kubenswrapper[4922]: I0218 12:01:24.563186 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:01:31 crc kubenswrapper[4922]: I0218 12:01:31.744536 4922 generic.go:334] "Generic (PLEG): container finished" podID="30aa9b56-28ab-4d32-beb5-965876a6e243" containerID="e9389767f4da8a0c337e9f914b300c43c380f37d7c3b13c7680f21f3537ce2e0" exitCode=0 Feb 18 12:01:31 crc kubenswrapper[4922]: I0218 12:01:31.744626 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" event={"ID":"30aa9b56-28ab-4d32-beb5-965876a6e243","Type":"ContainerDied","Data":"e9389767f4da8a0c337e9f914b300c43c380f37d7c3b13c7680f21f3537ce2e0"} Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.175244 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.321865 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory\") pod \"30aa9b56-28ab-4d32-beb5-965876a6e243\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.321945 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle\") pod \"30aa9b56-28ab-4d32-beb5-965876a6e243\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.322254 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zc6j\" (UniqueName: \"kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j\") pod \"30aa9b56-28ab-4d32-beb5-965876a6e243\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.322300 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam\") pod \"30aa9b56-28ab-4d32-beb5-965876a6e243\" (UID: \"30aa9b56-28ab-4d32-beb5-965876a6e243\") " Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.333114 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "30aa9b56-28ab-4d32-beb5-965876a6e243" (UID: "30aa9b56-28ab-4d32-beb5-965876a6e243"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.333122 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j" (OuterVolumeSpecName: "kube-api-access-7zc6j") pod "30aa9b56-28ab-4d32-beb5-965876a6e243" (UID: "30aa9b56-28ab-4d32-beb5-965876a6e243"). InnerVolumeSpecName "kube-api-access-7zc6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.367963 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30aa9b56-28ab-4d32-beb5-965876a6e243" (UID: "30aa9b56-28ab-4d32-beb5-965876a6e243"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.382302 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory" (OuterVolumeSpecName: "inventory") pod "30aa9b56-28ab-4d32-beb5-965876a6e243" (UID: "30aa9b56-28ab-4d32-beb5-965876a6e243"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.424847 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zc6j\" (UniqueName: \"kubernetes.io/projected/30aa9b56-28ab-4d32-beb5-965876a6e243-kube-api-access-7zc6j\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.424912 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.424939 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.424963 4922 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30aa9b56-28ab-4d32-beb5-965876a6e243-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.765952 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" event={"ID":"30aa9b56-28ab-4d32-beb5-965876a6e243","Type":"ContainerDied","Data":"a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7"} Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.766014 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c218233f0436075dc0d643615f92c4d2ae717eff4d5e1d3a08f32bad9362e7" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.766128 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.855770 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz"] Feb 18 12:01:33 crc kubenswrapper[4922]: E0218 12:01:33.856491 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30aa9b56-28ab-4d32-beb5-965876a6e243" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.856585 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="30aa9b56-28ab-4d32-beb5-965876a6e243" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.856977 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="30aa9b56-28ab-4d32-beb5-965876a6e243" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.858032 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.860404 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.860448 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.860699 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.861291 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:01:33 crc kubenswrapper[4922]: I0218 12:01:33.869066 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz"] Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.037054 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.037128 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96v5\" (UniqueName: \"kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.037153 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.138549 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.138941 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96v5\" (UniqueName: \"kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.138979 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.143206 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.147456 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.156303 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96v5\" (UniqueName: \"kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zs9qz\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.185675 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.663525 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz"] Feb 18 12:01:34 crc kubenswrapper[4922]: I0218 12:01:34.775590 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" event={"ID":"08ba745d-df3b-42c0-a384-ca64c96dd47f","Type":"ContainerStarted","Data":"e05f61cfdec2576ef26cb87c3b27fc9b04217963669c18e44f62bb3cfcd46f2c"} Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.143290 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.146146 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.153721 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.260260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.260638 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhpc\" (UniqueName: \"kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.260679 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.362115 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.362390 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lhpc\" (UniqueName: \"kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.362477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.362636 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.363078 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.382483 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5lhpc\" (UniqueName: \"kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc\") pod \"certified-operators-s6f4m\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.489113 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.795295 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" event={"ID":"08ba745d-df3b-42c0-a384-ca64c96dd47f","Type":"ContainerStarted","Data":"7c1f4358428a0cbbbae7955082ba8a010faf4ce840919e0b17a376820b5b9299"} Feb 18 12:01:35 crc kubenswrapper[4922]: I0218 12:01:35.825749 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" podStartSLOduration=2.291421275 podStartE2EDuration="2.825723274s" podCreationTimestamp="2026-02-18 12:01:33 +0000 UTC" firstStartedPulling="2026-02-18 12:01:34.671254164 +0000 UTC m=+1496.398958254" lastFinishedPulling="2026-02-18 12:01:35.205556173 +0000 UTC m=+1496.933260253" observedRunningTime="2026-02-18 12:01:35.812350666 +0000 UTC m=+1497.540054746" watchObservedRunningTime="2026-02-18 12:01:35.825723274 +0000 UTC m=+1497.553427354" Feb 18 12:01:36 crc kubenswrapper[4922]: I0218 12:01:36.047645 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:36 crc kubenswrapper[4922]: I0218 12:01:36.808596 4922 generic.go:334] "Generic (PLEG): container finished" podID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerID="5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89" exitCode=0 Feb 18 12:01:36 crc kubenswrapper[4922]: I0218 12:01:36.808653 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerDied","Data":"5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89"} Feb 18 12:01:36 crc kubenswrapper[4922]: I0218 12:01:36.808990 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerStarted","Data":"db4c510ec11e469c18d71a0c6c6742633749b823ee5c8d01a1d647d346beade0"} Feb 18 12:01:37 crc kubenswrapper[4922]: I0218 12:01:37.821552 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerStarted","Data":"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec"} Feb 18 12:01:38 crc kubenswrapper[4922]: I0218 12:01:38.833896 4922 generic.go:334] "Generic (PLEG): container finished" podID="08ba745d-df3b-42c0-a384-ca64c96dd47f" containerID="7c1f4358428a0cbbbae7955082ba8a010faf4ce840919e0b17a376820b5b9299" exitCode=0 Feb 18 12:01:38 crc kubenswrapper[4922]: I0218 12:01:38.833995 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" event={"ID":"08ba745d-df3b-42c0-a384-ca64c96dd47f","Type":"ContainerDied","Data":"7c1f4358428a0cbbbae7955082ba8a010faf4ce840919e0b17a376820b5b9299"} Feb 18 12:01:38 crc kubenswrapper[4922]: I0218 12:01:38.836547 4922 generic.go:334] 
"Generic (PLEG): container finished" podID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerID="b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec" exitCode=0 Feb 18 12:01:38 crc kubenswrapper[4922]: I0218 12:01:38.836579 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerDied","Data":"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec"} Feb 18 12:01:39 crc kubenswrapper[4922]: I0218 12:01:39.847262 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerStarted","Data":"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a"} Feb 18 12:01:39 crc kubenswrapper[4922]: I0218 12:01:39.885497 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s6f4m" podStartSLOduration=2.440724638 podStartE2EDuration="4.88547186s" podCreationTimestamp="2026-02-18 12:01:35 +0000 UTC" firstStartedPulling="2026-02-18 12:01:36.811329633 +0000 UTC m=+1498.539033713" lastFinishedPulling="2026-02-18 12:01:39.256076855 +0000 UTC m=+1500.983780935" observedRunningTime="2026-02-18 12:01:39.874212935 +0000 UTC m=+1501.601917025" watchObservedRunningTime="2026-02-18 12:01:39.88547186 +0000 UTC m=+1501.613175940" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.270923 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.472936 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam\") pod \"08ba745d-df3b-42c0-a384-ca64c96dd47f\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.473137 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory\") pod \"08ba745d-df3b-42c0-a384-ca64c96dd47f\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.473309 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q96v5\" (UniqueName: \"kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5\") pod \"08ba745d-df3b-42c0-a384-ca64c96dd47f\" (UID: \"08ba745d-df3b-42c0-a384-ca64c96dd47f\") " Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.480453 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5" (OuterVolumeSpecName: "kube-api-access-q96v5") pod "08ba745d-df3b-42c0-a384-ca64c96dd47f" (UID: "08ba745d-df3b-42c0-a384-ca64c96dd47f"). InnerVolumeSpecName "kube-api-access-q96v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.518469 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08ba745d-df3b-42c0-a384-ca64c96dd47f" (UID: "08ba745d-df3b-42c0-a384-ca64c96dd47f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.547503 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory" (OuterVolumeSpecName: "inventory") pod "08ba745d-df3b-42c0-a384-ca64c96dd47f" (UID: "08ba745d-df3b-42c0-a384-ca64c96dd47f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.576104 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q96v5\" (UniqueName: \"kubernetes.io/projected/08ba745d-df3b-42c0-a384-ca64c96dd47f-kube-api-access-q96v5\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.576153 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.576196 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08ba745d-df3b-42c0-a384-ca64c96dd47f-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.860148 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.861811 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zs9qz" event={"ID":"08ba745d-df3b-42c0-a384-ca64c96dd47f","Type":"ContainerDied","Data":"e05f61cfdec2576ef26cb87c3b27fc9b04217963669c18e44f62bb3cfcd46f2c"} Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.861867 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e05f61cfdec2576ef26cb87c3b27fc9b04217963669c18e44f62bb3cfcd46f2c" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.932951 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv"] Feb 18 12:01:40 crc kubenswrapper[4922]: E0218 12:01:40.933438 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ba745d-df3b-42c0-a384-ca64c96dd47f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.933456 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ba745d-df3b-42c0-a384-ca64c96dd47f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.933683 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ba745d-df3b-42c0-a384-ca64c96dd47f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.934356 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.937317 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.937969 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.938182 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.940132 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.950082 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv"] Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.985686 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.985736 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5bp\" (UniqueName: \"kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.985758 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:40 crc kubenswrapper[4922]: I0218 12:01:40.985887 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.087570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.087719 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.087743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp5bp\" (UniqueName: \"kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.087760 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.091128 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.091134 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.091531 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.107490 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp5bp\" (UniqueName: \"kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.251578 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:01:41 crc kubenswrapper[4922]: W0218 12:01:41.774145 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2685dd3b_59b6_4879_b59a_215b187b1344.slice/crio-d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd WatchSource:0}: Error finding container d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd: Status 404 returned error can't find the container with id d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.784016 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv"] Feb 18 12:01:41 crc kubenswrapper[4922]: I0218 12:01:41.873117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" event={"ID":"2685dd3b-59b6-4879-b59a-215b187b1344","Type":"ContainerStarted","Data":"d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd"} Feb 18 12:01:42 crc kubenswrapper[4922]: I0218 12:01:42.883229 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" event={"ID":"2685dd3b-59b6-4879-b59a-215b187b1344","Type":"ContainerStarted","Data":"753438b91bc2c108c2317277ef49245049570003d115f3ff5c156e26e54c9647"} Feb 18 12:01:42 crc kubenswrapper[4922]: I0218 12:01:42.908874 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" podStartSLOduration=2.5283173420000002 podStartE2EDuration="2.908852161s" podCreationTimestamp="2026-02-18 12:01:40 +0000 UTC" firstStartedPulling="2026-02-18 12:01:41.777181847 +0000 UTC m=+1503.504885927" lastFinishedPulling="2026-02-18 12:01:42.157716666 +0000 UTC m=+1503.885420746" observedRunningTime="2026-02-18 12:01:42.898437127 +0000 UTC m=+1504.626141207" watchObservedRunningTime="2026-02-18 12:01:42.908852161 +0000 UTC m=+1504.636556241" Feb 18 12:01:45 crc kubenswrapper[4922]: I0218 12:01:45.490049 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:45 crc kubenswrapper[4922]: I0218 12:01:45.490462 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:45 crc kubenswrapper[4922]: I0218 12:01:45.543144 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:45 crc kubenswrapper[4922]: I0218 12:01:45.953582 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:45 crc kubenswrapper[4922]: I0218 12:01:45.997547 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:47 crc kubenswrapper[4922]: I0218 12:01:47.932889 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s6f4m" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="registry-server" containerID="cri-o://a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a" gracePeriod=2 Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.403012 4922 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.533838 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content\") pod \"05c02623-ca55-4852-ac9f-1415e7d3abad\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.533911 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lhpc\" (UniqueName: \"kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc\") pod \"05c02623-ca55-4852-ac9f-1415e7d3abad\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.534015 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities\") pod \"05c02623-ca55-4852-ac9f-1415e7d3abad\" (UID: \"05c02623-ca55-4852-ac9f-1415e7d3abad\") " Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.535161 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities" (OuterVolumeSpecName: "utilities") pod "05c02623-ca55-4852-ac9f-1415e7d3abad" (UID: "05c02623-ca55-4852-ac9f-1415e7d3abad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.539832 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc" (OuterVolumeSpecName: "kube-api-access-5lhpc") pod "05c02623-ca55-4852-ac9f-1415e7d3abad" (UID: "05c02623-ca55-4852-ac9f-1415e7d3abad"). InnerVolumeSpecName "kube-api-access-5lhpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.586277 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05c02623-ca55-4852-ac9f-1415e7d3abad" (UID: "05c02623-ca55-4852-ac9f-1415e7d3abad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.636713 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.636766 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05c02623-ca55-4852-ac9f-1415e7d3abad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.636787 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lhpc\" (UniqueName: \"kubernetes.io/projected/05c02623-ca55-4852-ac9f-1415e7d3abad-kube-api-access-5lhpc\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.944609 4922 generic.go:334] "Generic (PLEG): container finished" podID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerID="a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a" exitCode=0 Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.944679 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerDied","Data":"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a"} Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.944681 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6f4m" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.944737 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6f4m" event={"ID":"05c02623-ca55-4852-ac9f-1415e7d3abad","Type":"ContainerDied","Data":"db4c510ec11e469c18d71a0c6c6742633749b823ee5c8d01a1d647d346beade0"} Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.944776 4922 scope.go:117] "RemoveContainer" containerID="a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.969006 4922 scope.go:117] "RemoveContainer" containerID="b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec" Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.986712 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:48 crc kubenswrapper[4922]: I0218 12:01:48.996719 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s6f4m"] Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.001705 4922 scope.go:117] "RemoveContainer" containerID="5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.053175 4922 scope.go:117] "RemoveContainer" containerID="a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a" Feb 18 12:01:49 crc kubenswrapper[4922]: E0218 12:01:49.053783 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a\": container with ID starting with a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a not found: ID does not exist" containerID="a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.053828 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a"} err="failed to get container status \"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a\": rpc error: code = NotFound desc = could not find container \"a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a\": container with ID starting with a289f1679934ad5340d76b537a689e2bc8e2dc588042505fe99286b54adb293a not found: ID does not exist" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.053856 4922 scope.go:117] "RemoveContainer" containerID="b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec" Feb 18 12:01:49 crc kubenswrapper[4922]: E0218 12:01:49.054110 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec\": container with ID starting with b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec not found: ID does not exist" containerID="b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.054132 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec"} err="failed to get container status \"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec\": rpc error: code = NotFound desc = could not find container \"b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec\": container with ID starting with b2420c2b091c5665f89700b0399e1d80eed8c610e604c4b313e169f55232a9ec not found: ID does not exist" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.054148 4922 scope.go:117] "RemoveContainer" containerID="5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89" Feb 18 12:01:49 crc kubenswrapper[4922]: E0218 12:01:49.054400 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89\": container with ID starting with 5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89 not found: ID does not exist" containerID="5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89" Feb 18 12:01:49 crc kubenswrapper[4922]: I0218 12:01:49.054441 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89"} err="failed to get container status \"5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89\": rpc error: code = NotFound desc = could not find container \"5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89\": container with ID starting with 5a30dcc68908a23a96146c41c7a80874cf0e57089acae944a8f86d7b46b07b89 not found: ID does not exist" Feb 18 12:01:50 crc kubenswrapper[4922]: I0218 12:01:50.996555 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" path="/var/lib/kubelet/pods/05c02623-ca55-4852-ac9f-1415e7d3abad/volumes" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.429686 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:01:57 crc kubenswrapper[4922]: E0218 12:01:57.431895 4922 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="extract-content" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.432005 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="extract-content" Feb 18 12:01:57 crc kubenswrapper[4922]: E0218 12:01:57.432100 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="extract-utilities" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.432174 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="extract-utilities" Feb 18 12:01:57 crc kubenswrapper[4922]: E0218 12:01:57.432242 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="registry-server" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.432295 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="registry-server" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.432625 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c02623-ca55-4852-ac9f-1415e7d3abad" containerName="registry-server" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.434292 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.446564 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.609256 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.609523 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.609625 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bs7\" (UniqueName: \"kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.711211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.711260 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities\") pod \"redhat-marketplace-bcvc8\" 
(UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.711286 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5bs7\" (UniqueName: \"kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.711670 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.711733 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.729904 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5bs7\" (UniqueName: \"kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7\") pod \"redhat-marketplace-bcvc8\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:57 crc kubenswrapper[4922]: I0218 12:01:57.761743 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:01:58 crc kubenswrapper[4922]: I0218 12:01:58.268727 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:01:59 crc kubenswrapper[4922]: I0218 12:01:59.037963 4922 generic.go:334] "Generic (PLEG): container finished" podID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerID="5b0a2a7731dab0112315f444a33e408175b5c28abb2f69079912cabb44e2557e" exitCode=0 Feb 18 12:01:59 crc kubenswrapper[4922]: I0218 12:01:59.038057 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerDied","Data":"5b0a2a7731dab0112315f444a33e408175b5c28abb2f69079912cabb44e2557e"} Feb 18 12:01:59 crc kubenswrapper[4922]: I0218 12:01:59.038281 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerStarted","Data":"f9aa604d477338624553d6467acab8f953e16b7d54b6e61af455d59803710c81"} Feb 18 12:02:01 crc kubenswrapper[4922]: I0218 12:02:01.079426 4922 generic.go:334] "Generic (PLEG): container finished" podID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerID="20488b03c617a43634c46564fbbe667e0a9ed54d02bba1ba2ca81cbbea0dad2c" exitCode=0 Feb 18 12:02:01 crc kubenswrapper[4922]: I0218 12:02:01.079508 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerDied","Data":"20488b03c617a43634c46564fbbe667e0a9ed54d02bba1ba2ca81cbbea0dad2c"} Feb 18 12:02:02 crc kubenswrapper[4922]: I0218 12:02:02.091303 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerStarted","Data":"f4bd75adca76d727b7937c3ad606434e562eddda6632e0e9b7e47844174dbe7d"} Feb 18 12:02:07 crc kubenswrapper[4922]: I0218 12:02:07.762257 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:07 crc kubenswrapper[4922]: I0218 12:02:07.762890 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:07 crc kubenswrapper[4922]: I0218 12:02:07.821528 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:07 crc kubenswrapper[4922]: I0218 12:02:07.842187 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bcvc8" podStartSLOduration=8.368031399 podStartE2EDuration="10.842168601s" podCreationTimestamp="2026-02-18 12:01:57 +0000 UTC" firstStartedPulling="2026-02-18 12:01:59.040784853 +0000 UTC m=+1520.768488933" lastFinishedPulling="2026-02-18 12:02:01.514922055 +0000 UTC m=+1523.242626135" observedRunningTime="2026-02-18 12:02:02.116275641 +0000 UTC m=+1523.843979711" watchObservedRunningTime="2026-02-18 12:02:07.842168601 +0000 UTC m=+1529.569872681" Feb 18 12:02:08 crc kubenswrapper[4922]: I0218 12:02:08.203706 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:08 crc kubenswrapper[4922]: I0218 12:02:08.250126 4922 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:02:10 crc kubenswrapper[4922]: I0218 12:02:10.022964 4922 scope.go:117] "RemoveContainer" containerID="ffcb369954fde5f2470b893096827224a7452c99e4a60328dd0f303414ba87d8" Feb 18 12:02:10 crc kubenswrapper[4922]: I0218 12:02:10.181653 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bcvc8" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="registry-server" containerID="cri-o://f4bd75adca76d727b7937c3ad606434e562eddda6632e0e9b7e47844174dbe7d" gracePeriod=2 Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.197200 4922 generic.go:334] "Generic (PLEG): container finished" podID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerID="f4bd75adca76d727b7937c3ad606434e562eddda6632e0e9b7e47844174dbe7d" exitCode=0 Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.197309 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerDied","Data":"f4bd75adca76d727b7937c3ad606434e562eddda6632e0e9b7e47844174dbe7d"} Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.197483 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bcvc8" event={"ID":"208a6c21-2dd8-4f3d-8ca6-b767d89bb091","Type":"ContainerDied","Data":"f9aa604d477338624553d6467acab8f953e16b7d54b6e61af455d59803710c81"} Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.197503 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9aa604d477338624553d6467acab8f953e16b7d54b6e61af455d59803710c81" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.202818 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.295200 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities\") pod \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.295336 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5bs7\" (UniqueName: \"kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7\") pod \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.295635 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content\") pod \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\" (UID: \"208a6c21-2dd8-4f3d-8ca6-b767d89bb091\") " Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.300294 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities" (OuterVolumeSpecName: "utilities") pod "208a6c21-2dd8-4f3d-8ca6-b767d89bb091" (UID: "208a6c21-2dd8-4f3d-8ca6-b767d89bb091"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.307304 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7" (OuterVolumeSpecName: "kube-api-access-v5bs7") pod "208a6c21-2dd8-4f3d-8ca6-b767d89bb091" (UID: "208a6c21-2dd8-4f3d-8ca6-b767d89bb091"). InnerVolumeSpecName "kube-api-access-v5bs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.323416 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "208a6c21-2dd8-4f3d-8ca6-b767d89bb091" (UID: "208a6c21-2dd8-4f3d-8ca6-b767d89bb091"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.398732 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.398760 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:11 crc kubenswrapper[4922]: I0218 12:02:11.398770 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5bs7\" (UniqueName: \"kubernetes.io/projected/208a6c21-2dd8-4f3d-8ca6-b767d89bb091-kube-api-access-v5bs7\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:12 crc kubenswrapper[4922]: I0218 12:02:12.207218 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bcvc8" Feb 18 12:02:12 crc kubenswrapper[4922]: I0218 12:02:12.248264 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:02:12 crc kubenswrapper[4922]: I0218 12:02:12.263836 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bcvc8"] Feb 18 12:02:12 crc kubenswrapper[4922]: I0218 12:02:12.986909 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" path="/var/lib/kubelet/pods/208a6c21-2dd8-4f3d-8ca6-b767d89bb091/volumes" Feb 18 12:03:10 crc kubenswrapper[4922]: I0218 12:03:10.143315 4922 scope.go:117] "RemoveContainer" containerID="1c2064f4a7cb5c8c928c435089f458b706b8fa933a0140eb0fd478bbb13155df" Feb 18 12:03:10 crc kubenswrapper[4922]: I0218 12:03:10.168303 4922 scope.go:117] "RemoveContainer" containerID="08eea4a1ca654ec0f46724358fe53403626c28895fe7bd0802a7388b3e60a117" Feb 18 12:03:39 crc kubenswrapper[4922]: I0218 12:03:39.807452 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:03:39 crc kubenswrapper[4922]: I0218 12:03:39.808034 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.052071 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a3b1-account-create-update-5qfd8"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.067384 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mqx2n"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.079224 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-95b8-account-create-update-58r6z"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.090790 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2714-account-create-update-j5l9f"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.116076 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-dd51-account-create-update-fr8ml"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.133197 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2zkj4"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.143790 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a3b1-account-create-update-5qfd8"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.155072 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mqx2n"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.164724 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-499z7"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.173863 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-xj7zt"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.183968 4922 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-dd51-account-create-update-fr8ml"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.193877 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2714-account-create-update-j5l9f"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.203547 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-xj7zt"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.213115 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2zkj4"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.222469 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-499z7"] Feb 18 12:04:03 crc kubenswrapper[4922]: I0218 12:04:03.232589 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-95b8-account-create-update-58r6z"] Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.985774 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f03ea4-2462-4f2c-b9b8-395fc9802993" path="/var/lib/kubelet/pods/05f03ea4-2462-4f2c-b9b8-395fc9802993/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.986973 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3c9160-dd6d-4591-9554-d3c74df3a64e" path="/var/lib/kubelet/pods/0d3c9160-dd6d-4591-9554-d3c74df3a64e/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.987695 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b15fbe3-8f30-41e8-8897-037694ccb56b" path="/var/lib/kubelet/pods/3b15fbe3-8f30-41e8-8897-037694ccb56b/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.988260 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e854dba-d50f-4228-9b7a-c8a0ae16347a" path="/var/lib/kubelet/pods/3e854dba-d50f-4228-9b7a-c8a0ae16347a/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.989352 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811ffd65-f5dc-44a3-a1cb-778937ca9771" path="/var/lib/kubelet/pods/811ffd65-f5dc-44a3-a1cb-778937ca9771/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.989876 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85eec6a5-292b-4061-bb90-18904535d9cc" path="/var/lib/kubelet/pods/85eec6a5-292b-4061-bb90-18904535d9cc/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.990425 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac3a541-a2f7-4d95-97ff-1361fbd3e81e" path="/var/lib/kubelet/pods/cac3a541-a2f7-4d95-97ff-1361fbd3e81e/volumes" Feb 18 12:04:04 crc kubenswrapper[4922]: I0218 12:04:04.991560 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc452273-8a5f-47d8-8aa5-1ddfe2240e28" path="/var/lib/kubelet/pods/cc452273-8a5f-47d8-8aa5-1ddfe2240e28/volumes" Feb 18 12:04:09 crc kubenswrapper[4922]: I0218 12:04:09.807622 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:04:09 crc kubenswrapper[4922]: I0218 12:04:09.808862 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.221244 4922 scope.go:117] "RemoveContainer" containerID="c34444b452577c8ce28445ed13810cd08c00c27fb0fdc17083d4a2fd0f78af8b" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.259159 4922 scope.go:117] "RemoveContainer" containerID="cca6d9a35f7a3b90b18a7c03d2317544897c0eada4e7ac4555360fd3747c2a53" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.322535 4922 scope.go:117] "RemoveContainer" containerID="d7cb0d481469976e2f0d27c45a95752ec61c6325990c025f51f551cbf42c3d51" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.392578 4922 scope.go:117] "RemoveContainer" containerID="e6310d509175d11956cf35f2881cac092138c8da93939b01d64f0006c350cdcc" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.415878 4922 scope.go:117] "RemoveContainer" containerID="a445d9cdaa7af5fce54eb1734c1edd288e3e96655dfca8e174ddcae4e353e89d" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.489352 4922 scope.go:117] "RemoveContainer" containerID="ede12ae546b38989e02144d9c29b45bfe17c0490fbfcd99cc1a79c54dc349009" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.536534 4922 scope.go:117] "RemoveContainer" containerID="fedb74d1b6b4c2f0bd2aef2df515af61a8cc0caff9248cf99b996d4ac610fc62" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.584093 4922 scope.go:117] "RemoveContainer" containerID="e7a3b382cba61f7101e30bbe239f4c456f983c4335448f6fab20d5e184472620" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.639288 4922 scope.go:117] "RemoveContainer" containerID="6c511819319a5fc193d0f75ed63d9083c79d2c12407691912b456da48212acab" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.663697 4922 scope.go:117] "RemoveContainer" containerID="019a6342d78a17e9007569b8df388ae4cd83074ceff02bfe55eb5c3a71054609" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.694424 4922 scope.go:117] "RemoveContainer" containerID="d21236550250b2958c4054030789e8473894e0d1d7c3c12b25573ed942732eed" Feb 18 12:04:10 crc kubenswrapper[4922]: I0218 12:04:10.719924 4922 scope.go:117] "RemoveContainer" containerID="38f62ebe43eed17090600fd985ab87c725adb4a3b86d21051e6be95923794e24" Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.054125 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-czsfv"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.067996 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b89-account-create-update-x8q45"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.081919 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mvrlh"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.090311 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mvrlh"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.099148 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-czsfv"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.107515 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b89-account-create-update-x8q45"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.116433 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f0db-account-create-update-vwv99"] Feb 18 12:04:35 crc kubenswrapper[4922]: I0218 12:04:35.126611 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-f0db-account-create-update-vwv99"] Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.031006 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g95qz"] Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.040797 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-59fc-account-create-update-29wvh"] Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.050827 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-59fc-account-create-update-29wvh"] Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.061949 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g95qz"] Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.990240 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f413eca-d25a-4b47-82f6-e25088b65f2d" path="/var/lib/kubelet/pods/3f413eca-d25a-4b47-82f6-e25088b65f2d/volumes" Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.991844 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a88eb1-e58a-437e-b1eb-2dcb7e80b37f" path="/var/lib/kubelet/pods/46a88eb1-e58a-437e-b1eb-2dcb7e80b37f/volumes" Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.992699 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb" path="/var/lib/kubelet/pods/76ba1bc1-3352-42a4-a80e-2fc2ac0e66eb/volumes" Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.993290 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87604619-ec13-480d-9456-c5062685287d" path="/var/lib/kubelet/pods/87604619-ec13-480d-9456-c5062685287d/volumes" Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.994892 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b521417c-1968-49ee-8435-9e44af7e8a52" path="/var/lib/kubelet/pods/b521417c-1968-49ee-8435-9e44af7e8a52/volumes" Feb 18 12:04:36 crc kubenswrapper[4922]: I0218 12:04:36.995802 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e55f2d-153c-47a0-95c4-84f8795ca57e" path="/var/lib/kubelet/pods/b9e55f2d-153c-47a0-95c4-84f8795ca57e/volumes" Feb 18 12:04:39 crc kubenswrapper[4922]: I0218 12:04:39.807948 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:04:39 crc kubenswrapper[4922]: I0218 12:04:39.808275 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:04:39 crc kubenswrapper[4922]: I0218 12:04:39.808341 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:04:39 crc kubenswrapper[4922]: I0218 12:04:39.809254 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:04:39 crc kubenswrapper[4922]: I0218 12:04:39.809330 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" gracePeriod=600 Feb 18 12:04:39 crc kubenswrapper[4922]: E0218 12:04:39.945752 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:04:40 crc kubenswrapper[4922]: I0218 12:04:40.813375 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" exitCode=0 Feb 18 12:04:40 crc kubenswrapper[4922]: I0218 12:04:40.813423 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df"} Feb 18 12:04:40 crc kubenswrapper[4922]: I0218 12:04:40.813802 4922 scope.go:117] "RemoveContainer" containerID="ef7998df4ff2ac956dafb01bf87962d308e4ed2ee7ceb57165a1d59bde7c799c" Feb 18 12:04:40 crc kubenswrapper[4922]: I0218 12:04:40.814579 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:04:40 crc kubenswrapper[4922]: E0218 12:04:40.814910 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:04:51 crc kubenswrapper[4922]: I0218 12:04:51.973703 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:04:51 crc kubenswrapper[4922]: E0218 12:04:51.974650 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:04:52 crc kubenswrapper[4922]: I0218 12:04:52.960757 4922 generic.go:334] "Generic (PLEG): container finished" podID="2685dd3b-59b6-4879-b59a-215b187b1344" containerID="753438b91bc2c108c2317277ef49245049570003d115f3ff5c156e26e54c9647" exitCode=0 Feb 18 12:04:52 crc kubenswrapper[4922]: I0218 12:04:52.960830 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" 
event={"ID":"2685dd3b-59b6-4879-b59a-215b187b1344","Type":"ContainerDied","Data":"753438b91bc2c108c2317277ef49245049570003d115f3ff5c156e26e54c9647"} Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.409568 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.552084 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp5bp\" (UniqueName: \"kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp\") pod \"2685dd3b-59b6-4879-b59a-215b187b1344\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.552560 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory\") pod \"2685dd3b-59b6-4879-b59a-215b187b1344\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.552734 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam\") pod \"2685dd3b-59b6-4879-b59a-215b187b1344\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.552797 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle\") pod \"2685dd3b-59b6-4879-b59a-215b187b1344\" (UID: \"2685dd3b-59b6-4879-b59a-215b187b1344\") " Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.563796 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp" (OuterVolumeSpecName: "kube-api-access-qp5bp") pod "2685dd3b-59b6-4879-b59a-215b187b1344" (UID: "2685dd3b-59b6-4879-b59a-215b187b1344"). InnerVolumeSpecName "kube-api-access-qp5bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.572609 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2685dd3b-59b6-4879-b59a-215b187b1344" (UID: "2685dd3b-59b6-4879-b59a-215b187b1344"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.589757 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory" (OuterVolumeSpecName: "inventory") pod "2685dd3b-59b6-4879-b59a-215b187b1344" (UID: "2685dd3b-59b6-4879-b59a-215b187b1344"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.592049 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2685dd3b-59b6-4879-b59a-215b187b1344" (UID: "2685dd3b-59b6-4879-b59a-215b187b1344"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.657866 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.657943 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.657988 4922 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2685dd3b-59b6-4879-b59a-215b187b1344-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.658006 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp5bp\" (UniqueName: \"kubernetes.io/projected/2685dd3b-59b6-4879-b59a-215b187b1344-kube-api-access-qp5bp\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.986609 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.998240 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv" event={"ID":"2685dd3b-59b6-4879-b59a-215b187b1344","Type":"ContainerDied","Data":"d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd"} Feb 18 12:04:54 crc kubenswrapper[4922]: I0218 12:04:54.998338 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8930c40b05ae338628746413aafafd1402c2f3d896ff0e8171acc0037b589fd" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.105289 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m"] Feb 18 12:04:55 crc kubenswrapper[4922]: E0218 12:04:55.106072 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2685dd3b-59b6-4879-b59a-215b187b1344" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106103 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="2685dd3b-59b6-4879-b59a-215b187b1344" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:04:55 crc kubenswrapper[4922]: E0218 12:04:55.106118 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="registry-server" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106125 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="registry-server" Feb 18 12:04:55 crc kubenswrapper[4922]: E0218 12:04:55.106179 4922 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="extract-content" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106191 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="extract-content" Feb 18 12:04:55 crc kubenswrapper[4922]: E0218 12:04:55.106209 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="extract-utilities" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106218 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="extract-utilities" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106656 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="208a6c21-2dd8-4f3d-8ca6-b767d89bb091" containerName="registry-server" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.106683 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="2685dd3b-59b6-4879-b59a-215b187b1344" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.107701 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.118528 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m"] Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.149860 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.150470 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.150490 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.150770 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.169950 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.170090 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.170165 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h8gh\" (UniqueName: \"kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.272056 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.272210 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.272291 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h8gh\" (UniqueName: \"kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.283128 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.283323 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.292741 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h8gh\" (UniqueName: \"kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7b67m\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:55 crc kubenswrapper[4922]: I0218 12:04:55.476879 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:04:56 crc kubenswrapper[4922]: I0218 12:04:56.044439 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m"] Feb 18 12:04:56 crc kubenswrapper[4922]: I0218 12:04:56.054568 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:04:57 crc kubenswrapper[4922]: I0218 12:04:57.011345 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" event={"ID":"28a59f5e-155a-44b9-827a-a48bf1615d3d","Type":"ContainerStarted","Data":"e4247a54fe570c27479d2bc1e0c4442c1e54068086222ac8518d6232db6583ea"} Feb 18 12:04:57 crc kubenswrapper[4922]: I0218 12:04:57.012102 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" event={"ID":"28a59f5e-155a-44b9-827a-a48bf1615d3d","Type":"ContainerStarted","Data":"18ae4f5c65cb3d7731a9211b2c1da7f253992b418f856b813ce8c0520557a5a6"} Feb 18 12:04:57 crc kubenswrapper[4922]: I0218 12:04:57.039435 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" podStartSLOduration=1.515953669 podStartE2EDuration="2.039402143s" podCreationTimestamp="2026-02-18 12:04:55 +0000 UTC" firstStartedPulling="2026-02-18 12:04:56.054185995 +0000 UTC m=+1697.781890075" lastFinishedPulling="2026-02-18 12:04:56.577634469 +0000 UTC m=+1698.305338549" observedRunningTime="2026-02-18 12:04:57.03138805 +0000 UTC m=+1698.759092130" watchObservedRunningTime="2026-02-18 12:04:57.039402143 +0000 UTC m=+1698.767106223" Feb 18 12:04:57 crc kubenswrapper[4922]: I0218 12:04:57.064854 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xhpvj"] Feb 18 12:04:57 crc kubenswrapper[4922]: I0218 12:04:57.078714 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xhpvj"] Feb 18 12:04:58 crc kubenswrapper[4922]: I0218 12:04:58.989840 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056" path="/var/lib/kubelet/pods/3f2ad2ed-7e29-4760-aa9f-e5bf6bc96056/volumes" Feb 18 12:05:03 crc kubenswrapper[4922]: I0218 12:05:03.973461 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:05:03 crc kubenswrapper[4922]: E0218 12:05:03.974900 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:05:10 crc kubenswrapper[4922]: I0218 12:05:10.950396 4922 scope.go:117] "RemoveContainer" containerID="c059c2685304c1bbe7cb0f5bf34e25f7ad09441877372a8258fba3155ab8954a" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.000682 4922 scope.go:117] "RemoveContainer" containerID="3b378847d4d22bec8f5e44e34e16024756ec3850a6ac282a1c69fb4c7cbed59f" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.037699 4922 scope.go:117] "RemoveContainer" 
containerID="c14ea122843eb522430115cdefefb0c3b7d6b58a0e5c203b5fb2fca56b1e56ba" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.094085 4922 scope.go:117] "RemoveContainer" containerID="2773d6c651dc9198f969541910251805f8f5c022eea560495fc1c18971e69a25" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.117976 4922 scope.go:117] "RemoveContainer" containerID="0f464cbb217061ebc522def4ee4f957ba0ab7d4cce7acd97b0cad24b01e4e4cc" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.176756 4922 scope.go:117] "RemoveContainer" containerID="459c3dccc59cc9a7a25c5999d57a9a40f0164adf154409ca83fde7883515bc8e" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.229729 4922 scope.go:117] "RemoveContainer" containerID="f872729a376e1215c5740d39fd4f7a50b0af6397c21fc8ffd7ccc6655e4b95bf" Feb 18 12:05:11 crc kubenswrapper[4922]: I0218 12:05:11.290713 4922 scope.go:117] "RemoveContainer" containerID="cb46a05482d6c2364b368bb1c2e067b6de93db6a4072db86c206647939a79206" Feb 18 12:05:17 crc kubenswrapper[4922]: I0218 12:05:17.974085 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:05:17 crc kubenswrapper[4922]: E0218 12:05:17.975793 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.055465 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fnkcj"] Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.066712 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fnkcj"] Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.084904 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-sznrv"] Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.100821 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-sznrv"] Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.987552 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2102ef9b-8151-4edf-8b43-7c4486203911" path="/var/lib/kubelet/pods/2102ef9b-8151-4edf-8b43-7c4486203911/volumes" Feb 18 12:05:18 crc kubenswrapper[4922]: I0218 12:05:18.988110 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f835c05-4bbb-4678-9410-8523cf308f05" path="/var/lib/kubelet/pods/5f835c05-4bbb-4678-9410-8523cf308f05/volumes" Feb 18 12:05:31 crc kubenswrapper[4922]: I0218 12:05:31.973027 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:05:31 crc kubenswrapper[4922]: E0218 12:05:31.973811 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:05:43 crc kubenswrapper[4922]: I0218 12:05:43.974087 4922 scope.go:117] "RemoveContainer" 
containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:05:43 crc kubenswrapper[4922]: E0218 12:05:43.974875 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:05:54 crc kubenswrapper[4922]: I0218 12:05:54.973477 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:05:54 crc kubenswrapper[4922]: E0218 12:05:54.974223 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:06:08 crc kubenswrapper[4922]: I0218 12:06:08.980612 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:06:08 crc kubenswrapper[4922]: E0218 12:06:08.982738 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:06:11 crc kubenswrapper[4922]: I0218 12:06:11.499242 4922 scope.go:117] "RemoveContainer" containerID="96a538608703e5f575eecc4d7943dd23e9a2c36f206b74a346320d63b212ab98" Feb 18 12:06:11 crc kubenswrapper[4922]: I0218 12:06:11.528251 4922 scope.go:117] "RemoveContainer" containerID="7b6ad8f3e92b6bbfd943139b3553e32910b71922866701c9884de500a3eaebeb" Feb 18 12:06:15 crc kubenswrapper[4922]: I0218 12:06:15.050737 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8pvj2"] Feb 18 12:06:15 crc kubenswrapper[4922]: I0218 12:06:15.066594 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8pvj2"] Feb 18 12:06:17 crc kubenswrapper[4922]: I0218 12:06:17.005116 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0" path="/var/lib/kubelet/pods/53cda7fd-c342-49d3-a5bc-8fcb3e09b3e0/volumes" Feb 18 12:06:23 crc kubenswrapper[4922]: I0218 12:06:23.973696 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:06:23 crc kubenswrapper[4922]: E0218 12:06:23.974554 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:06:37 crc 
kubenswrapper[4922]: I0218 12:06:37.973394 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:06:37 crc kubenswrapper[4922]: E0218 12:06:37.975543 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:06:45 crc kubenswrapper[4922]: I0218 12:06:45.042076 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-clz29"] Feb 18 12:06:45 crc kubenswrapper[4922]: I0218 12:06:45.054446 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-clz29"] Feb 18 12:06:46 crc kubenswrapper[4922]: I0218 12:06:46.994541 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71318c6d-61ee-4fb4-8682-7cf3fc0ae044" path="/var/lib/kubelet/pods/71318c6d-61ee-4fb4-8682-7cf3fc0ae044/volumes" Feb 18 12:06:47 crc kubenswrapper[4922]: I0218 12:06:47.030213 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mqfhx"] Feb 18 12:06:47 crc kubenswrapper[4922]: I0218 12:06:47.041130 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mqfhx"] Feb 18 12:06:48 crc kubenswrapper[4922]: I0218 12:06:48.986583 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31aad152-dcb7-472f-a0f8-d90ae972442b" path="/var/lib/kubelet/pods/31aad152-dcb7-472f-a0f8-d90ae972442b/volumes" Feb 18 12:06:50 crc kubenswrapper[4922]: I0218 12:06:50.973234 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:06:50 crc kubenswrapper[4922]: E0218 12:06:50.973991 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:06:53 crc kubenswrapper[4922]: I0218 12:06:53.040519 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rvpx7"] Feb 18 12:06:53 crc kubenswrapper[4922]: I0218 12:06:53.049408 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rvpx7"] Feb 18 12:06:54 crc kubenswrapper[4922]: I0218 12:06:54.987415 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7852f85-b8c5-458e-901c-3659c5ed2713" path="/var/lib/kubelet/pods/d7852f85-b8c5-458e-901c-3659c5ed2713/volumes" Feb 18 12:06:56 crc kubenswrapper[4922]: I0218 12:06:56.028578 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-st9pz"] Feb 18 12:06:56 crc kubenswrapper[4922]: I0218 12:06:56.039157 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-st9pz"] Feb 18 12:06:56 crc kubenswrapper[4922]: I0218 12:06:56.988422 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855fb3ec-e473-4a99-a94f-cc96dda6d9c4" 
path="/var/lib/kubelet/pods/855fb3ec-e473-4a99-a94f-cc96dda6d9c4/volumes" Feb 18 12:07:00 crc kubenswrapper[4922]: I0218 12:07:00.276207 4922 generic.go:334] "Generic (PLEG): container finished" podID="28a59f5e-155a-44b9-827a-a48bf1615d3d" containerID="e4247a54fe570c27479d2bc1e0c4442c1e54068086222ac8518d6232db6583ea" exitCode=0 Feb 18 12:07:00 crc kubenswrapper[4922]: I0218 12:07:00.276302 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" event={"ID":"28a59f5e-155a-44b9-827a-a48bf1615d3d","Type":"ContainerDied","Data":"e4247a54fe570c27479d2bc1e0c4442c1e54068086222ac8518d6232db6583ea"} Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.684105 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.819704 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h8gh\" (UniqueName: \"kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh\") pod \"28a59f5e-155a-44b9-827a-a48bf1615d3d\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.819832 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory\") pod \"28a59f5e-155a-44b9-827a-a48bf1615d3d\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.819860 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam\") pod \"28a59f5e-155a-44b9-827a-a48bf1615d3d\" (UID: \"28a59f5e-155a-44b9-827a-a48bf1615d3d\") " Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.825925 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh" (OuterVolumeSpecName: "kube-api-access-8h8gh") pod "28a59f5e-155a-44b9-827a-a48bf1615d3d" (UID: "28a59f5e-155a-44b9-827a-a48bf1615d3d"). InnerVolumeSpecName "kube-api-access-8h8gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.847161 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory" (OuterVolumeSpecName: "inventory") pod "28a59f5e-155a-44b9-827a-a48bf1615d3d" (UID: "28a59f5e-155a-44b9-827a-a48bf1615d3d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.853351 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28a59f5e-155a-44b9-827a-a48bf1615d3d" (UID: "28a59f5e-155a-44b9-827a-a48bf1615d3d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.922941 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.923051 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28a59f5e-155a-44b9-827a-a48bf1615d3d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:01 crc kubenswrapper[4922]: I0218 12:07:01.923119 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h8gh\" (UniqueName: \"kubernetes.io/projected/28a59f5e-155a-44b9-827a-a48bf1615d3d-kube-api-access-8h8gh\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.295446 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" event={"ID":"28a59f5e-155a-44b9-827a-a48bf1615d3d","Type":"ContainerDied","Data":"18ae4f5c65cb3d7731a9211b2c1da7f253992b418f856b813ce8c0520557a5a6"} Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.295492 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ae4f5c65cb3d7731a9211b2c1da7f253992b418f856b813ce8c0520557a5a6" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.295498 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7b67m" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.385459 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln"] Feb 18 12:07:02 crc kubenswrapper[4922]: E0218 12:07:02.385895 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a59f5e-155a-44b9-827a-a48bf1615d3d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.385914 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a59f5e-155a-44b9-827a-a48bf1615d3d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.386115 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a59f5e-155a-44b9-827a-a48bf1615d3d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.387049 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.396493 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln"] Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.548459 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9w4\" (UniqueName: \"kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.548587 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.548901 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.549724 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.549969 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.551019 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.551589 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.650855 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9w4\" (UniqueName: \"kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.651219 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.651332 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.656217 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.662854 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.673548 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9w4\" (UniqueName: \"kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:02 crc kubenswrapper[4922]: I0218 12:07:02.867232 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:07:03 crc kubenswrapper[4922]: I0218 12:07:03.420752 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln"] Feb 18 12:07:04 crc kubenswrapper[4922]: I0218 12:07:04.315531 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" event={"ID":"b2f62f96-5ba4-4d16-89d8-11ae5e941699","Type":"ContainerStarted","Data":"46c56531dc7f6fe8b2ba5d33360e8f5c404c054782958003bf246dc84e31961c"} Feb 18 12:07:04 crc kubenswrapper[4922]: I0218 12:07:04.316145 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" event={"ID":"b2f62f96-5ba4-4d16-89d8-11ae5e941699","Type":"ContainerStarted","Data":"13c41ae592beddfaa7d7f1100f4757a6653e55fee294002ac2782f785aade831"} Feb 18 12:07:05 crc kubenswrapper[4922]: I0218 12:07:05.973324 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:07:05 crc kubenswrapper[4922]: E0218 12:07:05.973898 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:07:11 crc kubenswrapper[4922]: I0218 12:07:11.650180 4922 scope.go:117] "RemoveContainer" 
containerID="fdbd37640dccec05f284f7ac0ade4661f3a31188c1203c2107b809735927d31b" Feb 18 12:07:11 crc kubenswrapper[4922]: I0218 12:07:11.686109 4922 scope.go:117] "RemoveContainer" containerID="a94ac8d35c2954541cdb5fa078a3d4a480cc9002ceb6fe5897faee87fe9e9f1f" Feb 18 12:07:11 crc kubenswrapper[4922]: I0218 12:07:11.728763 4922 scope.go:117] "RemoveContainer" containerID="f6b3d065bfd75b2cfcd638eb81caf2052d3e13f6ce152807bf20c4d48f24622e" Feb 18 12:07:11 crc kubenswrapper[4922]: I0218 12:07:11.776534 4922 scope.go:117] "RemoveContainer" containerID="5597fcea1ce088dbc75d5c2c33874c67f09dcb474426ad6372bb91fff5a29ef3" Feb 18 12:07:11 crc kubenswrapper[4922]: I0218 12:07:11.813388 4922 scope.go:117] "RemoveContainer" containerID="483ace8c8816a57983828e0b595d4b4e906ff56406e775bb5aac307ba89d5e30" Feb 18 12:07:19 crc kubenswrapper[4922]: I0218 12:07:19.044463 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" podStartSLOduration=16.482675762 podStartE2EDuration="17.044443223s" podCreationTimestamp="2026-02-18 12:07:02 +0000 UTC" firstStartedPulling="2026-02-18 12:07:03.432175086 +0000 UTC m=+1825.159879166" lastFinishedPulling="2026-02-18 12:07:03.993942557 +0000 UTC m=+1825.721646627" observedRunningTime="2026-02-18 12:07:04.334596427 +0000 UTC m=+1826.062300517" watchObservedRunningTime="2026-02-18 12:07:19.044443223 +0000 UTC m=+1840.772147303" Feb 18 12:07:19 crc kubenswrapper[4922]: I0218 12:07:19.050696 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zjb6x"] Feb 18 12:07:19 crc kubenswrapper[4922]: I0218 12:07:19.059917 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zjb6x"] Feb 18 12:07:20 crc kubenswrapper[4922]: I0218 12:07:20.973276 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:07:20 crc kubenswrapper[4922]: E0218 12:07:20.973639 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:07:20 crc kubenswrapper[4922]: I0218 12:07:20.991247 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcd3608-244b-44f0-be1f-5d953cd35964" path="/var/lib/kubelet/pods/4bcd3608-244b-44f0-be1f-5d953cd35964/volumes" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.103209 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.118769 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.138507 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.233522 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7gl\" (UniqueName: \"kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.233760 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.233955 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.335746 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7gl\" (UniqueName: \"kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.335845 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.335906 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.336466 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.336946 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.356151 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lm7gl\" (UniqueName: \"kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl\") pod \"community-operators-vklb7\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.441067 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:22 crc kubenswrapper[4922]: I0218 12:07:22.903857 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:23 crc kubenswrapper[4922]: I0218 12:07:23.482399 4922 generic.go:334] "Generic (PLEG): container finished" podID="ac7439c4-4267-4309-aae7-259734126f27" containerID="95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0" exitCode=0 Feb 18 12:07:23 crc kubenswrapper[4922]: I0218 12:07:23.482453 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerDied","Data":"95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0"} Feb 18 12:07:23 crc kubenswrapper[4922]: I0218 12:07:23.482488 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerStarted","Data":"e53febc7959bdd6c792cd27a5466ba3ccfc2726e0fe755d18fad2114a8a250b6"} Feb 18 12:07:24 crc kubenswrapper[4922]: I0218 12:07:24.494328 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerStarted","Data":"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec"} Feb 18 12:07:25 crc kubenswrapper[4922]: I0218 12:07:25.506122 4922 generic.go:334] "Generic (PLEG): container finished" podID="ac7439c4-4267-4309-aae7-259734126f27" containerID="1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec" exitCode=0 Feb 18 12:07:25 crc kubenswrapper[4922]: I0218 12:07:25.506487 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerDied","Data":"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec"} Feb 18 12:07:26 crc kubenswrapper[4922]: I0218 12:07:26.517699 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerStarted","Data":"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf"} Feb 18 12:07:26 crc kubenswrapper[4922]: I0218 12:07:26.541948 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vklb7" podStartSLOduration=2.093982673 podStartE2EDuration="4.54192547s" podCreationTimestamp="2026-02-18 12:07:22 +0000 UTC" firstStartedPulling="2026-02-18 12:07:23.484081927 +0000 UTC m=+1845.211786007" lastFinishedPulling="2026-02-18 12:07:25.932024724 +0000 UTC m=+1847.659728804" observedRunningTime="2026-02-18 12:07:26.534160454 +0000 UTC m=+1848.261864564" watchObservedRunningTime="2026-02-18 12:07:26.54192547 +0000 UTC m=+1848.269629550" Feb 18 12:07:32 crc kubenswrapper[4922]: I0218 12:07:32.442240 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:32 crc kubenswrapper[4922]: I0218 12:07:32.442960 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:32 crc kubenswrapper[4922]: I0218 12:07:32.490177 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:32 crc kubenswrapper[4922]: I0218 12:07:32.614500 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:32 crc kubenswrapper[4922]: I0218 12:07:32.724038 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:33 crc kubenswrapper[4922]: I0218 12:07:33.973191 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:07:33 crc kubenswrapper[4922]: E0218 12:07:33.973552 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:07:34 crc kubenswrapper[4922]: I0218 12:07:34.584836 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vklb7" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="registry-server" containerID="cri-o://f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf" gracePeriod=2 Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.077191 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.103128 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content\") pod \"ac7439c4-4267-4309-aae7-259734126f27\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.103217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities\") pod \"ac7439c4-4267-4309-aae7-259734126f27\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.103250 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm7gl\" (UniqueName: \"kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl\") pod \"ac7439c4-4267-4309-aae7-259734126f27\" (UID: \"ac7439c4-4267-4309-aae7-259734126f27\") " Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.108447 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities" (OuterVolumeSpecName: "utilities") pod "ac7439c4-4267-4309-aae7-259734126f27" (UID: "ac7439c4-4267-4309-aae7-259734126f27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.121231 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl" (OuterVolumeSpecName: "kube-api-access-lm7gl") pod "ac7439c4-4267-4309-aae7-259734126f27" (UID: "ac7439c4-4267-4309-aae7-259734126f27"). InnerVolumeSpecName "kube-api-access-lm7gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.161690 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac7439c4-4267-4309-aae7-259734126f27" (UID: "ac7439c4-4267-4309-aae7-259734126f27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.205888 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.205943 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac7439c4-4267-4309-aae7-259734126f27-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.205953 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm7gl\" (UniqueName: \"kubernetes.io/projected/ac7439c4-4267-4309-aae7-259734126f27-kube-api-access-lm7gl\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.597000 4922 generic.go:334] "Generic (PLEG): container finished" podID="ac7439c4-4267-4309-aae7-259734126f27" containerID="f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf" exitCode=0 Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.597060 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerDied","Data":"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf"} Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.597091 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vklb7" event={"ID":"ac7439c4-4267-4309-aae7-259734126f27","Type":"ContainerDied","Data":"e53febc7959bdd6c792cd27a5466ba3ccfc2726e0fe755d18fad2114a8a250b6"} Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.597107 4922 scope.go:117] "RemoveContainer" containerID="f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.597112 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vklb7" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.621613 4922 scope.go:117] "RemoveContainer" containerID="1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.655396 4922 scope.go:117] "RemoveContainer" containerID="95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.662886 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.701823 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vklb7"] Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.733536 4922 scope.go:117] "RemoveContainer" containerID="f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf" Feb 18 12:07:35 crc kubenswrapper[4922]: E0218 12:07:35.757920 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf\": container with ID starting with f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf not found: ID does not exist" containerID="f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.757983 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf"} err="failed to get container status \"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf\": rpc error: code = NotFound desc = could not find container \"f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf\": container with ID starting with f9838d0ef428602969b121ca90b6ec2759edb7614230d6ca342fd808e9027bdf not found: ID does not exist" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.758017 4922 scope.go:117] "RemoveContainer" containerID="1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec" Feb 18 12:07:35 crc kubenswrapper[4922]: E0218 12:07:35.785152 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec\": container with ID starting with 1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec not found: ID does not exist" containerID="1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.785241 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec"} err="failed to get container status \"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec\": rpc error: code = NotFound desc = could not find container \"1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec\": container with ID starting with 1fcb5b811fa263973c9bb501d12265509e1f4dcccaa9ed6fa964c6181ac3e2ec not found: ID does not exist" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.785303 4922 scope.go:117] "RemoveContainer" containerID="95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0" Feb 18 12:07:35 crc kubenswrapper[4922]: E0218 12:07:35.788776 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0\": container with ID starting with 95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0 not found: ID does not exist" containerID="95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0" Feb 18 12:07:35 crc kubenswrapper[4922]: I0218 12:07:35.788857 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0"} err="failed to get container status \"95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0\": rpc error: code = NotFound desc = could not find container \"95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0\": container with ID starting with 95f824af57fb1a326e2795f6ef387d9d820419c7f79841467ba73d6d09a47ab0 not found: ID does not exist" Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.037041 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p2pzf"] Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.044954 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p2pzf"] Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.053017 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c19a-account-create-update-24shd"] Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.061860 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c19a-account-create-update-24shd"] Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.983763 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157bc07b-77b8-4a29-b8e0-9a205215187b" path="/var/lib/kubelet/pods/157bc07b-77b8-4a29-b8e0-9a205215187b/volumes" Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.984457 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7439c4-4267-4309-aae7-259734126f27" path="/var/lib/kubelet/pods/ac7439c4-4267-4309-aae7-259734126f27/volumes" Feb 18 12:07:36 crc kubenswrapper[4922]: I0218 12:07:36.985208 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9360e33-9ae9-4b84-a898-c2c22626a565" path="/var/lib/kubelet/pods/f9360e33-9ae9-4b84-a898-c2c22626a565/volumes" Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.032692 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-96dc-account-create-update-4px58"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.046449 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-b9cbq"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.055515 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ea20-account-create-update-k5t5k"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.062960 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-96dc-account-create-update-4px58"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.071989 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-b9cbq"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.080136 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ea20-account-create-update-k5t5k"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.090629 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-ddrmz"] Feb 18 12:07:37 crc kubenswrapper[4922]: I0218 12:07:37.114179 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ddrmz"] Feb 18 12:07:38 crc kubenswrapper[4922]: I0218 12:07:38.987642 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c3abe9-3a81-44ef-babf-818b176f6437" path="/var/lib/kubelet/pods/41c3abe9-3a81-44ef-babf-818b176f6437/volumes" Feb 18 12:07:38 crc kubenswrapper[4922]: I0218 12:07:38.988618 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7513cf0a-f653-48b9-a365-9732179aaffc" path="/var/lib/kubelet/pods/7513cf0a-f653-48b9-a365-9732179aaffc/volumes" Feb 18 12:07:38 crc kubenswrapper[4922]: I0218 12:07:38.989128 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b28b3ba-c697-4cef-8e3f-e41317e3abe6" path="/var/lib/kubelet/pods/9b28b3ba-c697-4cef-8e3f-e41317e3abe6/volumes" Feb 18 12:07:38 crc kubenswrapper[4922]: I0218 12:07:38.989668 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea3a613-3571-4de4-be73-07a4db1c146e" path="/var/lib/kubelet/pods/cea3a613-3571-4de4-be73-07a4db1c146e/volumes" Feb 18 12:07:45 crc kubenswrapper[4922]: I0218 12:07:45.974344 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:07:45 crc kubenswrapper[4922]: E0218 12:07:45.975032 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:07:58 crc kubenswrapper[4922]: I0218 12:07:58.979518 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:07:58 crc kubenswrapper[4922]: E0218 12:07:58.980200 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:08:07 crc kubenswrapper[4922]: I0218 12:08:07.049722 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ws7m8"] Feb 18 12:08:07 crc kubenswrapper[4922]: I0218 12:08:07.060249 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ws7m8"] Feb 18 12:08:08 crc kubenswrapper[4922]: I0218 12:08:08.985085 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5" path="/var/lib/kubelet/pods/ca4c58a3-4dbc-4aae-90cb-0b77d123e5d5/volumes" Feb 18 12:08:10 crc kubenswrapper[4922]: I0218 12:08:10.973145 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:08:10 crc kubenswrapper[4922]: E0218 12:08:10.973917 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:08:11 crc kubenswrapper[4922]: I0218 12:08:11.973492 4922 scope.go:117] "RemoveContainer" containerID="9eae3101b2310737957f7e6d08c731592c72422d2cd0b2731a1d4e5979cf4d34" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.016067 4922 scope.go:117] "RemoveContainer" containerID="ac0ab0e9aaca817513e97dbed88ce8e6eac29d917cc8fc47fb5c8da1460429d9" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.070653 4922 scope.go:117] "RemoveContainer" containerID="64eecc2f7b37277d3eaa046c9a0ed3c760eb621a97a64a32162b927b6a47e4d1" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.105262 4922 scope.go:117] "RemoveContainer" containerID="38158e702c80c7a31868549296123efd8bf7bfe465aae6ab472768cf855c3a38" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.172340 4922 scope.go:117] "RemoveContainer" containerID="bf9c600a2ed87d6610ddd93585d70fd608560cee157862758bf35ddf2c2a4754" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.197494 4922 scope.go:117] "RemoveContainer" containerID="b2705f7646a829371102cfea131123135c7490beec56768c960d999d9c5fa2de" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.255054 4922 scope.go:117] "RemoveContainer" containerID="f4bd75adca76d727b7937c3ad606434e562eddda6632e0e9b7e47844174dbe7d" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.273383 4922 scope.go:117] "RemoveContainer" containerID="2ca4584a9bdc48778761e031dee58443f19954bb292722b1e84049c5d0d3891e" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.292213 4922 scope.go:117] "RemoveContainer" containerID="5b0a2a7731dab0112315f444a33e408175b5c28abb2f69079912cabb44e2557e" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.313415 4922 scope.go:117] "RemoveContainer" containerID="6447a128ed7c2ef808d7f125af51fae05848cf0c3a78ee0cef1bb21c2071c85c" Feb 18 12:08:12 crc kubenswrapper[4922]: I0218 12:08:12.331197 4922 scope.go:117] "RemoveContainer" containerID="20488b03c617a43634c46564fbbe667e0a9ed54d02bba1ba2ca81cbbea0dad2c" Feb 18 12:08:15 crc kubenswrapper[4922]: I0218 12:08:15.930194 4922 generic.go:334] "Generic (PLEG): container finished" podID="b2f62f96-5ba4-4d16-89d8-11ae5e941699" containerID="46c56531dc7f6fe8b2ba5d33360e8f5c404c054782958003bf246dc84e31961c" exitCode=0 Feb 18 12:08:15 crc kubenswrapper[4922]: I0218 12:08:15.930475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" event={"ID":"b2f62f96-5ba4-4d16-89d8-11ae5e941699","Type":"ContainerDied","Data":"46c56531dc7f6fe8b2ba5d33360e8f5c404c054782958003bf246dc84e31961c"} Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.334310 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.465787 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory\") pod \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.467047 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam\") pod \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.467090 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9w4\" (UniqueName: \"kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4\") pod \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\" (UID: \"b2f62f96-5ba4-4d16-89d8-11ae5e941699\") " Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.474494 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4" (OuterVolumeSpecName: "kube-api-access-tr9w4") pod "b2f62f96-5ba4-4d16-89d8-11ae5e941699" (UID: "b2f62f96-5ba4-4d16-89d8-11ae5e941699"). InnerVolumeSpecName "kube-api-access-tr9w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.501176 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory" (OuterVolumeSpecName: "inventory") pod "b2f62f96-5ba4-4d16-89d8-11ae5e941699" (UID: "b2f62f96-5ba4-4d16-89d8-11ae5e941699"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.502914 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2f62f96-5ba4-4d16-89d8-11ae5e941699" (UID: "b2f62f96-5ba4-4d16-89d8-11ae5e941699"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.570383 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.570431 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2f62f96-5ba4-4d16-89d8-11ae5e941699-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.570447 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9w4\" (UniqueName: \"kubernetes.io/projected/b2f62f96-5ba4-4d16-89d8-11ae5e941699-kube-api-access-tr9w4\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.949163 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" event={"ID":"b2f62f96-5ba4-4d16-89d8-11ae5e941699","Type":"ContainerDied","Data":"13c41ae592beddfaa7d7f1100f4757a6653e55fee294002ac2782f785aade831"} Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.949527 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c41ae592beddfaa7d7f1100f4757a6653e55fee294002ac2782f785aade831" Feb 18 12:08:17 crc kubenswrapper[4922]: I0218 12:08:17.949228 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.047352 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc"] Feb 18 12:08:18 crc kubenswrapper[4922]: E0218 12:08:18.047735 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="extract-content" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.047751 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="extract-content" Feb 18 12:08:18 crc kubenswrapper[4922]: E0218 12:08:18.047768 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="extract-utilities" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.047777 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="extract-utilities" Feb 18 12:08:18 crc kubenswrapper[4922]: E0218 12:08:18.047790 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="registry-server" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.047800 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="registry-server" Feb 18 12:08:18 crc kubenswrapper[4922]: E0218 12:08:18.047828 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f62f96-5ba4-4d16-89d8-11ae5e941699" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.047834 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f62f96-5ba4-4d16-89d8-11ae5e941699" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:18 crc 
kubenswrapper[4922]: I0218 12:08:18.048016 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f62f96-5ba4-4d16-89d8-11ae5e941699" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.048030 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7439c4-4267-4309-aae7-259734126f27" containerName="registry-server" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.062781 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.115660 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.116060 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.116333 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.116605 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.133558 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc"] Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.222968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.223201 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cc6l\" (UniqueName: \"kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.223445 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.325189 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cc6l\" (UniqueName: \"kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.325316 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.325446 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.335698 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.343542 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.344350 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cc6l\" (UniqueName: \"kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hprwc\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.434612 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.772202 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc"] Feb 18 12:08:18 crc kubenswrapper[4922]: I0218 12:08:18.962739 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" event={"ID":"353e7c86-6842-40e4-ac3d-e2032eef15c5","Type":"ContainerStarted","Data":"f7144af24da3674fe3bfa5c89017af24270cc0d2432907f93d8a751e963ddc7d"} Feb 18 12:08:19 crc kubenswrapper[4922]: I0218 12:08:19.976431 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" event={"ID":"353e7c86-6842-40e4-ac3d-e2032eef15c5","Type":"ContainerStarted","Data":"024c9200cc944fa7d0d57d8c7ae8a256168a2253689f7368fddc0bc9307314f5"} Feb 18 12:08:20 crc kubenswrapper[4922]: I0218 12:08:20.008567 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" podStartSLOduration=1.268592464 podStartE2EDuration="2.008546983s" podCreationTimestamp="2026-02-18 12:08:18 +0000 UTC" firstStartedPulling="2026-02-18 12:08:18.772615203 +0000 UTC m=+1900.500319283" lastFinishedPulling="2026-02-18 12:08:19.512569722 +0000 UTC m=+1901.240273802" observedRunningTime="2026-02-18 12:08:19.994680803 +0000 UTC m=+1901.722384893" watchObservedRunningTime="2026-02-18 12:08:20.008546983 +0000 UTC m=+1901.736251093" Feb 18 12:08:23 crc kubenswrapper[4922]: I0218 12:08:23.973895 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:08:23 crc kubenswrapper[4922]: E0218 12:08:23.974626 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:08:25 crc kubenswrapper[4922]: I0218 12:08:25.021527 4922 generic.go:334] "Generic (PLEG): container finished" podID="353e7c86-6842-40e4-ac3d-e2032eef15c5" containerID="024c9200cc944fa7d0d57d8c7ae8a256168a2253689f7368fddc0bc9307314f5" exitCode=0 Feb 18 12:08:25 crc kubenswrapper[4922]: I0218 12:08:25.021583 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" event={"ID":"353e7c86-6842-40e4-ac3d-e2032eef15c5","Type":"ContainerDied","Data":"024c9200cc944fa7d0d57d8c7ae8a256168a2253689f7368fddc0bc9307314f5"} Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.049787 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zqpbh"] Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.063893 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gts9l"] Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.074520 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gts9l"] Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.082788 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-zqpbh"] Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.470131 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.527498 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cc6l\" (UniqueName: \"kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l\") pod \"353e7c86-6842-40e4-ac3d-e2032eef15c5\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.534589 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l" (OuterVolumeSpecName: "kube-api-access-2cc6l") pod "353e7c86-6842-40e4-ac3d-e2032eef15c5" (UID: "353e7c86-6842-40e4-ac3d-e2032eef15c5"). InnerVolumeSpecName "kube-api-access-2cc6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.629180 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory\") pod \"353e7c86-6842-40e4-ac3d-e2032eef15c5\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.629320 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam\") pod \"353e7c86-6842-40e4-ac3d-e2032eef15c5\" (UID: \"353e7c86-6842-40e4-ac3d-e2032eef15c5\") " Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.630126 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cc6l\" (UniqueName: \"kubernetes.io/projected/353e7c86-6842-40e4-ac3d-e2032eef15c5-kube-api-access-2cc6l\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.655201 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "353e7c86-6842-40e4-ac3d-e2032eef15c5" (UID: "353e7c86-6842-40e4-ac3d-e2032eef15c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.655233 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory" (OuterVolumeSpecName: "inventory") pod "353e7c86-6842-40e4-ac3d-e2032eef15c5" (UID: "353e7c86-6842-40e4-ac3d-e2032eef15c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.732329 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:26 crc kubenswrapper[4922]: I0218 12:08:26.732378 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/353e7c86-6842-40e4-ac3d-e2032eef15c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.003082 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fc0adb-b8ae-4fd6-88eb-3b6357173103" path="/var/lib/kubelet/pods/95fc0adb-b8ae-4fd6-88eb-3b6357173103/volumes" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.004530 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacb8ffe-ff30-4292-b253-1e12d07f499b" path="/var/lib/kubelet/pods/aacb8ffe-ff30-4292-b253-1e12d07f499b/volumes" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.044025 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" event={"ID":"353e7c86-6842-40e4-ac3d-e2032eef15c5","Type":"ContainerDied","Data":"f7144af24da3674fe3bfa5c89017af24270cc0d2432907f93d8a751e963ddc7d"} Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.044076 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7144af24da3674fe3bfa5c89017af24270cc0d2432907f93d8a751e963ddc7d" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.044095 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hprwc" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.205160 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh"] Feb 18 12:08:27 crc kubenswrapper[4922]: E0218 12:08:27.205743 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353e7c86-6842-40e4-ac3d-e2032eef15c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.205762 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="353e7c86-6842-40e4-ac3d-e2032eef15c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.206094 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="353e7c86-6842-40e4-ac3d-e2032eef15c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.207031 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.209467 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.209713 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.209852 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.210967 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.215277 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh"] Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.242577 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.242622 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wq7\" (UniqueName: \"kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.242706 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.344173 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.344246 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64wq7\" (UniqueName: \"kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.344324 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.347837 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.348348 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.365344 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64wq7\" (UniqueName: \"kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sz5wh\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:27 crc kubenswrapper[4922]: I0218 12:08:27.522057 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:08:28 crc kubenswrapper[4922]: I0218 12:08:28.032283 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh"] Feb 18 12:08:28 crc kubenswrapper[4922]: I0218 12:08:28.053949 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" event={"ID":"c107695a-fdf7-48c6-b165-5e4dd2427148","Type":"ContainerStarted","Data":"6aca127bc3993d0dae302a205ad3f1b63aa98027f67d0e8fc71b42da03f6581b"} Feb 18 12:08:29 crc kubenswrapper[4922]: I0218 12:08:29.065047 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" event={"ID":"c107695a-fdf7-48c6-b165-5e4dd2427148","Type":"ContainerStarted","Data":"8ea96b3623a7dca277e92ddcee5cb2ddcc8b300dc1e184235406bf4e928c6ca2"} Feb 18 12:08:29 crc kubenswrapper[4922]: I0218 12:08:29.088990 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" podStartSLOduration=1.441219157 podStartE2EDuration="2.088965519s" podCreationTimestamp="2026-02-18 12:08:27 +0000 UTC" firstStartedPulling="2026-02-18 12:08:28.03729307 +0000 UTC m=+1909.764997150" lastFinishedPulling="2026-02-18 12:08:28.685039432 +0000 UTC m=+1910.412743512" observedRunningTime="2026-02-18 12:08:29.085333207 +0000 UTC m=+1910.813037287" watchObservedRunningTime="2026-02-18 12:08:29.088965519 +0000 UTC m=+1910.816669599" Feb 18 12:08:35 crc kubenswrapper[4922]: I0218 12:08:35.974691 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:08:35 crc kubenswrapper[4922]: E0218 12:08:35.975512 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:08:49 crc kubenswrapper[4922]: I0218 12:08:49.973243 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:08:49 crc kubenswrapper[4922]: E0218 12:08:49.974102 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:09:03 crc kubenswrapper[4922]: I0218 12:09:03.973484 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:09:03 crc kubenswrapper[4922]: E0218 12:09:03.974231 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:09:04 crc kubenswrapper[4922]: I0218 12:09:04.368081 4922 generic.go:334] "Generic (PLEG): container finished" podID="c107695a-fdf7-48c6-b165-5e4dd2427148" containerID="8ea96b3623a7dca277e92ddcee5cb2ddcc8b300dc1e184235406bf4e928c6ca2" exitCode=0 Feb 18 12:09:04 crc kubenswrapper[4922]: I0218 12:09:04.368138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" event={"ID":"c107695a-fdf7-48c6-b165-5e4dd2427148","Type":"ContainerDied","Data":"8ea96b3623a7dca277e92ddcee5cb2ddcc8b300dc1e184235406bf4e928c6ca2"} Feb 18 12:09:05 crc kubenswrapper[4922]: I0218 12:09:05.859745 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.007462 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64wq7\" (UniqueName: \"kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7\") pod \"c107695a-fdf7-48c6-b165-5e4dd2427148\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.007641 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam\") pod \"c107695a-fdf7-48c6-b165-5e4dd2427148\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.007716 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory\") pod \"c107695a-fdf7-48c6-b165-5e4dd2427148\" (UID: \"c107695a-fdf7-48c6-b165-5e4dd2427148\") " Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.013243 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7" (OuterVolumeSpecName: "kube-api-access-64wq7") pod "c107695a-fdf7-48c6-b165-5e4dd2427148" (UID: "c107695a-fdf7-48c6-b165-5e4dd2427148"). InnerVolumeSpecName "kube-api-access-64wq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.035313 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c107695a-fdf7-48c6-b165-5e4dd2427148" (UID: "c107695a-fdf7-48c6-b165-5e4dd2427148"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.040061 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory" (OuterVolumeSpecName: "inventory") pod "c107695a-fdf7-48c6-b165-5e4dd2427148" (UID: "c107695a-fdf7-48c6-b165-5e4dd2427148"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.111998 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64wq7\" (UniqueName: \"kubernetes.io/projected/c107695a-fdf7-48c6-b165-5e4dd2427148-kube-api-access-64wq7\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.112499 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.112628 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c107695a-fdf7-48c6-b165-5e4dd2427148-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.385866 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" event={"ID":"c107695a-fdf7-48c6-b165-5e4dd2427148","Type":"ContainerDied","Data":"6aca127bc3993d0dae302a205ad3f1b63aa98027f67d0e8fc71b42da03f6581b"} Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.386169 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aca127bc3993d0dae302a205ad3f1b63aa98027f67d0e8fc71b42da03f6581b" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.385925 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sz5wh" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.508147 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl"] Feb 18 12:09:06 crc kubenswrapper[4922]: E0218 12:09:06.508554 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c107695a-fdf7-48c6-b165-5e4dd2427148" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.508572 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c107695a-fdf7-48c6-b165-5e4dd2427148" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.508775 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c107695a-fdf7-48c6-b165-5e4dd2427148" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.509412 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.512529 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.512752 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.512929 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.513087 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.523861 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl"] Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.526326 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.526384 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.526441 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpnx\" (UniqueName: \"kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.628107 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.628319 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.628384 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpnx\" (UniqueName: 
\"kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.633155 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.637905 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.644232 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpnx\" (UniqueName: \"kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:06 crc kubenswrapper[4922]: I0218 12:09:06.850464 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:07 crc kubenswrapper[4922]: I0218 12:09:07.391259 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl"] Feb 18 12:09:07 crc kubenswrapper[4922]: I0218 12:09:07.395514 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" event={"ID":"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d","Type":"ContainerStarted","Data":"b7b1ebedfebb46a47d356ce9223386e680a037fd14252fe17313a77c3838483a"} Feb 18 12:09:08 crc kubenswrapper[4922]: I0218 12:09:08.405243 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" event={"ID":"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d","Type":"ContainerStarted","Data":"a8d14529d596b2d3019a7c74078324669b0e3b7ca1a5541f03a98ba8120df860"} Feb 18 12:09:08 crc kubenswrapper[4922]: I0218 12:09:08.432451 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" podStartSLOduration=2.023280997 podStartE2EDuration="2.432427736s" podCreationTimestamp="2026-02-18 12:09:06 +0000 UTC" firstStartedPulling="2026-02-18 12:09:07.374303474 +0000 UTC m=+1949.102007554" lastFinishedPulling="2026-02-18 12:09:07.783450213 +0000 UTC m=+1949.511154293" observedRunningTime="2026-02-18 12:09:08.423301585 +0000 UTC m=+1950.151005665" watchObservedRunningTime="2026-02-18 12:09:08.432427736 +0000 UTC m=+1950.160131826" Feb 18 12:09:12 crc kubenswrapper[4922]: I0218 12:09:12.503323 4922 scope.go:117] "RemoveContainer" containerID="3dd5bc6bbbea448a7d9229f94c3182f66fafad86532a2426f6a51eaa5b649205" Feb 18 12:09:12 crc kubenswrapper[4922]: I0218 
12:09:12.560629 4922 scope.go:117] "RemoveContainer" containerID="ee60fdb1d51bfbe6bd8d0e50a4e4f1a8691171bcc2ba0c807ba59a9d88e2dc0d" Feb 18 12:09:14 crc kubenswrapper[4922]: I0218 12:09:14.038154 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqtvp"] Feb 18 12:09:14 crc kubenswrapper[4922]: I0218 12:09:14.046171 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqtvp"] Feb 18 12:09:14 crc kubenswrapper[4922]: I0218 12:09:14.983697 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3440f4a-2f1d-4d69-aafe-ec2eb86183cc" path="/var/lib/kubelet/pods/b3440f4a-2f1d-4d69-aafe-ec2eb86183cc/volumes" Feb 18 12:09:15 crc kubenswrapper[4922]: I0218 12:09:15.974903 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:09:15 crc kubenswrapper[4922]: E0218 12:09:15.975794 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:09:27 crc kubenswrapper[4922]: I0218 12:09:27.973494 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:09:27 crc kubenswrapper[4922]: E0218 12:09:27.974666 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:09:41 crc kubenswrapper[4922]: I0218 12:09:41.974613 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:09:42 crc kubenswrapper[4922]: I0218 12:09:42.726010 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff"} Feb 18 12:09:51 crc kubenswrapper[4922]: I0218 12:09:51.823122 4922 generic.go:334] "Generic (PLEG): container finished" podID="ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" containerID="a8d14529d596b2d3019a7c74078324669b0e3b7ca1a5541f03a98ba8120df860" exitCode=0 Feb 18 12:09:51 crc kubenswrapper[4922]: I0218 12:09:51.823224 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" event={"ID":"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d","Type":"ContainerDied","Data":"a8d14529d596b2d3019a7c74078324669b0e3b7ca1a5541f03a98ba8120df860"} Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.288610 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.344984 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam\") pod \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.345038 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xpnx\" (UniqueName: \"kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx\") pod \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.345337 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory\") pod \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\" (UID: \"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d\") " Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.352825 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx" (OuterVolumeSpecName: "kube-api-access-8xpnx") pod "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" (UID: "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d"). InnerVolumeSpecName "kube-api-access-8xpnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.381474 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory" (OuterVolumeSpecName: "inventory") pod "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" (UID: "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.384483 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" (UID: "ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.447750 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.447800 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.447815 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xpnx\" (UniqueName: \"kubernetes.io/projected/ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d-kube-api-access-8xpnx\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.843777 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" event={"ID":"ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d","Type":"ContainerDied","Data":"b7b1ebedfebb46a47d356ce9223386e680a037fd14252fe17313a77c3838483a"} Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.844121 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7b1ebedfebb46a47d356ce9223386e680a037fd14252fe17313a77c3838483a" Feb 18 12:09:53 crc kubenswrapper[4922]: I0218 12:09:53.843934 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.024842 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz4px"] Feb 18 12:09:54 crc kubenswrapper[4922]: E0218 12:09:54.025307 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.025328 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.025546 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.026220 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.029947 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.030640 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.031831 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.033508 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.037116 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz4px"] Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.059972 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.061345 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9gz6\" (UniqueName: \"kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.061506 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.163796 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9gz6\" (UniqueName: \"kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.163913 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.164727 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc 
kubenswrapper[4922]: I0218 12:09:54.167932 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.173960 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.184562 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9gz6\" (UniqueName: \"kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6\") pod \"ssh-known-hosts-edpm-deployment-cz4px\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.347307 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:09:54 crc kubenswrapper[4922]: I0218 12:09:54.854254 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cz4px"] Feb 18 12:09:55 crc kubenswrapper[4922]: I0218 12:09:55.873007 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" event={"ID":"c2fa843a-470e-441c-93c9-8c412459933b","Type":"ContainerStarted","Data":"2155458a0d41d2a078dfa477e75d3f295a29102fe0d4b31385906a721ac9fc69"} Feb 18 12:09:55 crc kubenswrapper[4922]: I0218 12:09:55.873250 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" event={"ID":"c2fa843a-470e-441c-93c9-8c412459933b","Type":"ContainerStarted","Data":"cb56bd5543e36cd0c5573b96d86a626b879a0205ac0dce5406cef7788990a831"} Feb 18 12:09:55 crc kubenswrapper[4922]: I0218 12:09:55.898040 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" podStartSLOduration=1.525958537 podStartE2EDuration="1.89802061s" podCreationTimestamp="2026-02-18 12:09:54 +0000 UTC" firstStartedPulling="2026-02-18 12:09:54.867443013 +0000 UTC m=+1996.595147093" lastFinishedPulling="2026-02-18 12:09:55.239505086 +0000 UTC m=+1996.967209166" observedRunningTime="2026-02-18 12:09:55.888837708 +0000 UTC m=+1997.616541798" watchObservedRunningTime="2026-02-18 12:09:55.89802061 +0000 UTC m=+1997.625724690" Feb 18 12:10:01 crc kubenswrapper[4922]: I0218 12:10:01.927701 4922 generic.go:334] "Generic (PLEG): container finished" podID="c2fa843a-470e-441c-93c9-8c412459933b" containerID="2155458a0d41d2a078dfa477e75d3f295a29102fe0d4b31385906a721ac9fc69" exitCode=0 Feb 18 12:10:01 crc kubenswrapper[4922]: I0218 12:10:01.927746 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" event={"ID":"c2fa843a-470e-441c-93c9-8c412459933b","Type":"ContainerDied","Data":"2155458a0d41d2a078dfa477e75d3f295a29102fe0d4b31385906a721ac9fc69"} Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.347749 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.548101 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam\") pod \"c2fa843a-470e-441c-93c9-8c412459933b\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.548300 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9gz6\" (UniqueName: \"kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6\") pod \"c2fa843a-470e-441c-93c9-8c412459933b\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.548410 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0\") pod \"c2fa843a-470e-441c-93c9-8c412459933b\" (UID: \"c2fa843a-470e-441c-93c9-8c412459933b\") " Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.557734 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6" (OuterVolumeSpecName: "kube-api-access-m9gz6") pod "c2fa843a-470e-441c-93c9-8c412459933b" (UID: "c2fa843a-470e-441c-93c9-8c412459933b"). InnerVolumeSpecName "kube-api-access-m9gz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.931556 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9gz6\" (UniqueName: \"kubernetes.io/projected/c2fa843a-470e-441c-93c9-8c412459933b-kube-api-access-m9gz6\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.931974 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c2fa843a-470e-441c-93c9-8c412459933b" (UID: "c2fa843a-470e-441c-93c9-8c412459933b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.939091 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2fa843a-470e-441c-93c9-8c412459933b" (UID: "c2fa843a-470e-441c-93c9-8c412459933b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.951099 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" event={"ID":"c2fa843a-470e-441c-93c9-8c412459933b","Type":"ContainerDied","Data":"cb56bd5543e36cd0c5573b96d86a626b879a0205ac0dce5406cef7788990a831"} Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.951155 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb56bd5543e36cd0c5573b96d86a626b879a0205ac0dce5406cef7788990a831" Feb 18 12:10:03 crc kubenswrapper[4922]: I0218 12:10:03.951202 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cz4px" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.034412 4922 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.034459 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2fa843a-470e-441c-93c9-8c412459933b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.110444 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25"] Feb 18 12:10:04 crc kubenswrapper[4922]: E0218 12:10:04.110980 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fa843a-470e-441c-93c9-8c412459933b" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.111006 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fa843a-470e-441c-93c9-8c412459933b" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.111253 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2fa843a-470e-441c-93c9-8c412459933b" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.112090 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.114678 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.114737 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.115136 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.115254 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.126719 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25"] Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.245497 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.245835 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.245859 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dhd\" (UniqueName: \"kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.347564 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.347680 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.347704 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dhd\" (UniqueName: \"kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.353201 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.353215 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.367424 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dhd\" (UniqueName: \"kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b5h25\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.451603 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.989080 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25"] Feb 18 12:10:04 crc kubenswrapper[4922]: I0218 12:10:04.997945 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:10:05 crc kubenswrapper[4922]: I0218 12:10:05.971475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" event={"ID":"227ab888-976c-4ce1-beb8-abbe305c6d79","Type":"ContainerStarted","Data":"6a338d676d67c02553a3ed22517161f630c3e29222fabb352e3c16eeab57926b"} Feb 18 12:10:05 crc kubenswrapper[4922]: I0218 12:10:05.972181 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" event={"ID":"227ab888-976c-4ce1-beb8-abbe305c6d79","Type":"ContainerStarted","Data":"e45b93f4844ffc72c1bc4c029021b6cc531116c46cb3c0bd6ccfb945234158a8"} Feb 18 12:10:06 crc kubenswrapper[4922]: I0218 12:10:06.003890 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" podStartSLOduration=1.463399148 podStartE2EDuration="2.003866152s" podCreationTimestamp="2026-02-18 12:10:04 +0000 UTC" firstStartedPulling="2026-02-18 12:10:04.997651881 +0000 UTC m=+2006.725355971" lastFinishedPulling="2026-02-18 12:10:05.538118885 +0000 UTC m=+2007.265822975" observedRunningTime="2026-02-18 12:10:05.989419878 +0000 UTC m=+2007.717123958" watchObservedRunningTime="2026-02-18 12:10:06.003866152 +0000 UTC m=+2007.731570232" Feb 18 12:10:12 crc kubenswrapper[4922]: I0218 12:10:12.651457 4922 scope.go:117] "RemoveContainer" containerID="fe9e0b8c506169c8ff2962eae62f59ba2039b683aba9a221a651171f7491288f" Feb 18 12:10:14 crc kubenswrapper[4922]: I0218 12:10:14.061304 4922 generic.go:334] "Generic (PLEG): container finished" podID="227ab888-976c-4ce1-beb8-abbe305c6d79" containerID="6a338d676d67c02553a3ed22517161f630c3e29222fabb352e3c16eeab57926b" exitCode=0 Feb 18 12:10:14 crc kubenswrapper[4922]: I0218 12:10:14.061415 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" event={"ID":"227ab888-976c-4ce1-beb8-abbe305c6d79","Type":"ContainerDied","Data":"6a338d676d67c02553a3ed22517161f630c3e29222fabb352e3c16eeab57926b"} Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.510592 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.592080 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory\") pod \"227ab888-976c-4ce1-beb8-abbe305c6d79\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.592156 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92dhd\" (UniqueName: \"kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd\") pod \"227ab888-976c-4ce1-beb8-abbe305c6d79\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.592308 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam\") pod \"227ab888-976c-4ce1-beb8-abbe305c6d79\" (UID: \"227ab888-976c-4ce1-beb8-abbe305c6d79\") " Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.597614 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd" (OuterVolumeSpecName: "kube-api-access-92dhd") pod "227ab888-976c-4ce1-beb8-abbe305c6d79" (UID: "227ab888-976c-4ce1-beb8-abbe305c6d79"). InnerVolumeSpecName "kube-api-access-92dhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.618410 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "227ab888-976c-4ce1-beb8-abbe305c6d79" (UID: "227ab888-976c-4ce1-beb8-abbe305c6d79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.626645 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory" (OuterVolumeSpecName: "inventory") pod "227ab888-976c-4ce1-beb8-abbe305c6d79" (UID: "227ab888-976c-4ce1-beb8-abbe305c6d79"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.694437 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.694464 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/227ab888-976c-4ce1-beb8-abbe305c6d79-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:15 crc kubenswrapper[4922]: I0218 12:10:15.694474 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92dhd\" (UniqueName: \"kubernetes.io/projected/227ab888-976c-4ce1-beb8-abbe305c6d79-kube-api-access-92dhd\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.081150 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" event={"ID":"227ab888-976c-4ce1-beb8-abbe305c6d79","Type":"ContainerDied","Data":"e45b93f4844ffc72c1bc4c029021b6cc531116c46cb3c0bd6ccfb945234158a8"} Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.081199 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45b93f4844ffc72c1bc4c029021b6cc531116c46cb3c0bd6ccfb945234158a8" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.081201 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b5h25" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.180982 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z"] Feb 18 12:10:16 crc kubenswrapper[4922]: E0218 12:10:16.181510 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227ab888-976c-4ce1-beb8-abbe305c6d79" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.181533 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="227ab888-976c-4ce1-beb8-abbe305c6d79" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.181759 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="227ab888-976c-4ce1-beb8-abbe305c6d79" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.182472 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.185365 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.185611 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.185795 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.185987 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.204470 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z"] Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.304228 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.305380 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.305774 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5h4\" (UniqueName: \"kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.407671 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5h4\" (UniqueName: \"kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.408077 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.408120 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.415451 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.423014 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.429933 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5h4\" (UniqueName: \"kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:16 crc kubenswrapper[4922]: I0218 12:10:16.502917 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:17 crc kubenswrapper[4922]: I0218 12:10:17.058653 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z"] Feb 18 12:10:17 crc kubenswrapper[4922]: I0218 12:10:17.097385 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" event={"ID":"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9","Type":"ContainerStarted","Data":"38c17c5d036cba7c5c3872091f42de94c68d4f5d546f513297a4d61e02240874"} Feb 18 12:10:18 crc kubenswrapper[4922]: I0218 12:10:18.106135 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" event={"ID":"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9","Type":"ContainerStarted","Data":"daaa6c1a89adf85e202516fca8131d072d601b2824c5cbed83b90661a7b68d6f"} Feb 18 12:10:18 crc kubenswrapper[4922]: I0218 12:10:18.127760 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" podStartSLOduration=1.741161312 podStartE2EDuration="2.127740658s" podCreationTimestamp="2026-02-18 12:10:16 +0000 UTC" firstStartedPulling="2026-02-18 12:10:17.072273038 +0000 UTC m=+2018.799977128" lastFinishedPulling="2026-02-18 12:10:17.458852394 +0000 UTC m=+2019.186556474" observedRunningTime="2026-02-18 12:10:18.126137788 +0000 UTC m=+2019.853841868" watchObservedRunningTime="2026-02-18 12:10:18.127740658 +0000 UTC m=+2019.855444738" Feb 18 12:10:27 crc kubenswrapper[4922]: I0218 12:10:27.195505 4922 generic.go:334] "Generic (PLEG): container finished" podID="fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" containerID="daaa6c1a89adf85e202516fca8131d072d601b2824c5cbed83b90661a7b68d6f" exitCode=0 Feb 18 12:10:27 crc kubenswrapper[4922]: I0218 12:10:27.195605 4922 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" event={"ID":"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9","Type":"ContainerDied","Data":"daaa6c1a89adf85e202516fca8131d072d601b2824c5cbed83b90661a7b68d6f"} Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.622154 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.747004 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g5h4\" (UniqueName: \"kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4\") pod \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.747057 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam\") pod \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.747288 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory\") pod \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\" (UID: \"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9\") " Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.753921 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4" (OuterVolumeSpecName: "kube-api-access-4g5h4") pod "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" (UID: "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9"). InnerVolumeSpecName "kube-api-access-4g5h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.774793 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory" (OuterVolumeSpecName: "inventory") pod "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" (UID: "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.777788 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" (UID: "fac1ed4a-2fa4-4220-80fb-f54e3a357fb9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
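
Annotation: the "Observed pod startup duration" entry above for reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z reports both podStartE2EDuration and podStartSLOduration. A minimal sketch of how the two figures appear to relate, using the monotonic m=+ offsets logged in that entry; the assumption that the SLO figure is the end-to-end figure minus the image-pull window is inferred from the field names, not stated in the log itself.

```python
# Check how podStartSLOduration relates to podStartE2EDuration for the
# reboot-os-edpm-deployment pod, using the monotonic m=+ offsets logged above.
# Assumption (not stated in the log): SLO duration = E2E duration minus the
# image-pull window (lastFinishedPulling - firstStartedPulling).

first_started_pulling = 2018.799977128   # m=+ offset of firstStartedPulling
last_finished_pulling = 2019.186556474   # m=+ offset of lastFinishedPulling
pod_start_e2e = 2.127740658              # podStartE2EDuration, seconds
pod_start_slo_logged = 1.741161312       # podStartSLOduration, seconds

pull_window = last_finished_pulling - first_started_pulling
slo_estimate = pod_start_e2e - pull_window

print(f"image pull window : {pull_window:.9f}s")
print(f"estimated SLO time: {slo_estimate:.9f}s (logged: {pod_start_slo_logged}s)")
# -> pull window 0.386579346s, estimate 1.741161312s, matching the logged value.
```
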
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.849749 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.849800 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g5h4\" (UniqueName: \"kubernetes.io/projected/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-kube-api-access-4g5h4\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:28 crc kubenswrapper[4922]: I0218 12:10:28.849818 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fac1ed4a-2fa4-4220-80fb-f54e3a357fb9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.212930 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" event={"ID":"fac1ed4a-2fa4-4220-80fb-f54e3a357fb9","Type":"ContainerDied","Data":"38c17c5d036cba7c5c3872091f42de94c68d4f5d546f513297a4d61e02240874"} Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.212967 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c17c5d036cba7c5c3872091f42de94c68d4f5d546f513297a4d61e02240874" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.212971 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.311959 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq"] Feb 18 12:10:29 crc kubenswrapper[4922]: E0218 12:10:29.314123 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.314154 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.314505 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac1ed4a-2fa4-4220-80fb-f54e3a357fb9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.315412 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.321214 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.321406 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.321498 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.321941 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.322035 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.322129 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.322182 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.322231 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.324822 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq"] Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.460337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.460502 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.460543 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.460691 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mcp\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.460765 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461007 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461086 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461120 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461163 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461207 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461307 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461338 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.461395 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.562833 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.562878 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.562918 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mcp\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.562953 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.562981 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563018 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563042 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563066 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563092 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563108 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563163 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563183 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563204 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.563227 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.567260 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.567551 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.568011 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.571681 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.571847 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.572162 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.573552 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.573964 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.574687 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.575142 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.575810 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.577438 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.583756 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.583787 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mcp\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bxngq\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:29 crc kubenswrapper[4922]: I0218 12:10:29.635874 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:10:30 crc kubenswrapper[4922]: I0218 12:10:30.237605 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq"] Feb 18 12:10:31 crc kubenswrapper[4922]: I0218 12:10:31.231485 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" event={"ID":"98bc83e7-66dd-4133-82cd-d4301c233f9d","Type":"ContainerStarted","Data":"4938616f4f8b9f59ee92ec69c7db137c366133a3da099ee49fe218fc024cbfe1"} Feb 18 12:10:31 crc kubenswrapper[4922]: I0218 12:10:31.232997 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" event={"ID":"98bc83e7-66dd-4133-82cd-d4301c233f9d","Type":"ContainerStarted","Data":"daacfe7efb7e08b3efeba8d32115cca5950451e2edadcd24196c068b03499b35"} Feb 18 12:10:31 crc kubenswrapper[4922]: I0218 12:10:31.256098 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" podStartSLOduration=1.6755501179999999 podStartE2EDuration="2.256076507s" podCreationTimestamp="2026-02-18 12:10:29 +0000 UTC" firstStartedPulling="2026-02-18 12:10:30.247074447 +0000 UTC m=+2031.974778527" lastFinishedPulling="2026-02-18 12:10:30.827600826 +0000 UTC m=+2032.555304916" observedRunningTime="2026-02-18 12:10:31.254153388 +0000 UTC m=+2032.981857468" watchObservedRunningTime="2026-02-18 12:10:31.256076507 +0000 UTC m=+2032.983780587" Feb 18 12:11:06 crc kubenswrapper[4922]: I0218 12:11:06.545890 4922 generic.go:334] "Generic (PLEG): container finished" podID="98bc83e7-66dd-4133-82cd-d4301c233f9d" containerID="4938616f4f8b9f59ee92ec69c7db137c366133a3da099ee49fe218fc024cbfe1" exitCode=0 Feb 18 12:11:06 crc kubenswrapper[4922]: I0218 12:11:06.545951 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" event={"ID":"98bc83e7-66dd-4133-82cd-d4301c233f9d","Type":"ContainerDied","Data":"4938616f4f8b9f59ee92ec69c7db137c366133a3da099ee49fe218fc024cbfe1"} Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.043094 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.186956 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187030 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6mcp\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187138 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187183 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187252 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187329 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187352 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187396 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187434 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: 
\"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187457 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187477 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187500 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187522 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.187551 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle\") pod \"98bc83e7-66dd-4133-82cd-d4301c233f9d\" (UID: \"98bc83e7-66dd-4133-82cd-d4301c233f9d\") " Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.194859 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.195436 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.195954 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.196022 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.196229 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.196981 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.197848 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.197951 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.198290 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.207463 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.207586 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp" (OuterVolumeSpecName: "kube-api-access-k6mcp") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "kube-api-access-k6mcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.207595 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.220067 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory" (OuterVolumeSpecName: "inventory") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.234988 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "98bc83e7-66dd-4133-82cd-d4301c233f9d" (UID: "98bc83e7-66dd-4133-82cd-d4301c233f9d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292072 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6mcp\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-kube-api-access-k6mcp\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292118 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292133 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292147 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292160 4922 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292171 4922 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292182 4922 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292195 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292206 4922 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292233 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292248 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/98bc83e7-66dd-4133-82cd-d4301c233f9d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292260 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292271 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.292284 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98bc83e7-66dd-4133-82cd-d4301c233f9d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.567717 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" event={"ID":"98bc83e7-66dd-4133-82cd-d4301c233f9d","Type":"ContainerDied","Data":"daacfe7efb7e08b3efeba8d32115cca5950451e2edadcd24196c068b03499b35"} Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.567765 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daacfe7efb7e08b3efeba8d32115cca5950451e2edadcd24196c068b03499b35" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.567825 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bxngq" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.680093 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt"] Feb 18 12:11:08 crc kubenswrapper[4922]: E0218 12:11:08.683080 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bc83e7-66dd-4133-82cd-d4301c233f9d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.683125 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bc83e7-66dd-4133-82cd-d4301c233f9d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.683495 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bc83e7-66dd-4133-82cd-d4301c233f9d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.684732 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.687492 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.687664 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.687812 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.687826 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.687976 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.697746 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt"] Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.804121 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.804208 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.804442 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.804836 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94rs\" (UniqueName: \"kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.804893 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.907503 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l94rs\" (UniqueName: \"kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.907559 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.907635 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.907688 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.907739 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.908758 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.912532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.914639 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.916154 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:08 crc kubenswrapper[4922]: I0218 12:11:08.925194 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94rs\" (UniqueName: \"kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8jmtt\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:09 crc kubenswrapper[4922]: I0218 12:11:09.001662 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:11:09 crc kubenswrapper[4922]: I0218 12:11:09.540172 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt"] Feb 18 12:11:09 crc kubenswrapper[4922]: I0218 12:11:09.581929 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" event={"ID":"45d322f9-bf52-4679-ab43-9d222bc09a14","Type":"ContainerStarted","Data":"cb909c8a47f940a517a676dc5536d0bf0b70a866ff9c6682d5855478d99f6690"} Feb 18 12:11:10 crc kubenswrapper[4922]: I0218 12:11:10.592499 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" event={"ID":"45d322f9-bf52-4679-ab43-9d222bc09a14","Type":"ContainerStarted","Data":"14e866305ca7680d979cf41bd87ba2c36720433b54f20a2b9b9d179d1f1c9a18"} Feb 18 12:11:10 crc kubenswrapper[4922]: I0218 12:11:10.619502 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" podStartSLOduration=2.185812717 podStartE2EDuration="2.619479258s" podCreationTimestamp="2026-02-18 12:11:08 +0000 UTC" firstStartedPulling="2026-02-18 12:11:09.551952985 +0000 UTC m=+2071.279657065" lastFinishedPulling="2026-02-18 12:11:09.985619526 +0000 UTC m=+2071.713323606" observedRunningTime="2026-02-18 12:11:10.608458861 +0000 UTC m=+2072.336162961" watchObservedRunningTime="2026-02-18 12:11:10.619479258 +0000 UTC m=+2072.347183348" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.190305 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.206198 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.215260 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.394659 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpsh7\" (UniqueName: \"kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.394791 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.394853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.496275 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpsh7\" (UniqueName: \"kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.496709 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.496851 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.497567 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.497585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.519381 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rpsh7\" (UniqueName: \"kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7\") pod \"redhat-operators-f9wgh\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:50 crc kubenswrapper[4922]: I0218 12:11:50.536101 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:11:51 crc kubenswrapper[4922]: I0218 12:11:51.022707 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:11:51 crc kubenswrapper[4922]: I0218 12:11:51.978134 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerID="9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2" exitCode=0 Feb 18 12:11:51 crc kubenswrapper[4922]: I0218 12:11:51.978214 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerDied","Data":"9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2"} Feb 18 12:11:51 crc kubenswrapper[4922]: I0218 12:11:51.978475 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerStarted","Data":"a04d7d069df5d9960d42f6d4bd51a40213af02b5d84049f2834e25d340b39cd9"} Feb 18 12:11:54 crc kubenswrapper[4922]: I0218 12:11:54.023837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerStarted","Data":"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc"} Feb 18 12:11:56 crc kubenswrapper[4922]: I0218 12:11:56.044547 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerID="6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc" exitCode=0 Feb 18 12:11:56 crc kubenswrapper[4922]: I0218 12:11:56.044622 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerDied","Data":"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc"} Feb 18 12:11:58 crc kubenswrapper[4922]: I0218 12:11:58.068484 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerStarted","Data":"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24"} Feb 18 12:12:00 crc kubenswrapper[4922]: I0218 12:12:00.536274 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:00 crc kubenswrapper[4922]: I0218 12:12:00.536906 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:01 crc kubenswrapper[4922]: I0218 12:12:01.593743 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9wgh" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" probeResult="failure" output=< Feb 18 12:12:01 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:12:01 crc kubenswrapper[4922]: > Feb 18 12:12:09 crc 
kubenswrapper[4922]: I0218 12:12:09.808096 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:12:09 crc kubenswrapper[4922]: I0218 12:12:09.808718 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:12:11 crc kubenswrapper[4922]: I0218 12:12:11.592685 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9wgh" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" probeResult="failure" output=< Feb 18 12:12:11 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:12:11 crc kubenswrapper[4922]: > Feb 18 12:12:13 crc kubenswrapper[4922]: I0218 12:12:13.329288 4922 generic.go:334] "Generic (PLEG): container finished" podID="45d322f9-bf52-4679-ab43-9d222bc09a14" containerID="14e866305ca7680d979cf41bd87ba2c36720433b54f20a2b9b9d179d1f1c9a18" exitCode=0 Feb 18 12:12:13 crc kubenswrapper[4922]: I0218 12:12:13.329460 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" event={"ID":"45d322f9-bf52-4679-ab43-9d222bc09a14","Type":"ContainerDied","Data":"14e866305ca7680d979cf41bd87ba2c36720433b54f20a2b9b9d179d1f1c9a18"} Feb 18 12:12:13 crc kubenswrapper[4922]: I0218 12:12:13.359630 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9wgh" podStartSLOduration=18.123089104 podStartE2EDuration="23.359607624s" podCreationTimestamp="2026-02-18 12:11:50 +0000 UTC" firstStartedPulling="2026-02-18 12:11:51.98104474 +0000 UTC m=+2113.708748820" lastFinishedPulling="2026-02-18 12:11:57.21756326 +0000 UTC m=+2118.945267340" observedRunningTime="2026-02-18 12:11:58.092843963 +0000 UTC m=+2119.820548043" watchObservedRunningTime="2026-02-18 12:12:13.359607624 +0000 UTC m=+2135.087311704" Feb 18 12:12:14 crc kubenswrapper[4922]: I0218 12:12:14.948492 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.086677 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0\") pod \"45d322f9-bf52-4679-ab43-9d222bc09a14\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.086766 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l94rs\" (UniqueName: \"kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs\") pod \"45d322f9-bf52-4679-ab43-9d222bc09a14\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.086817 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory\") pod \"45d322f9-bf52-4679-ab43-9d222bc09a14\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.087116 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle\") pod \"45d322f9-bf52-4679-ab43-9d222bc09a14\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.088022 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam\") pod \"45d322f9-bf52-4679-ab43-9d222bc09a14\" (UID: \"45d322f9-bf52-4679-ab43-9d222bc09a14\") " Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.097280 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs" (OuterVolumeSpecName: "kube-api-access-l94rs") pod "45d322f9-bf52-4679-ab43-9d222bc09a14" (UID: "45d322f9-bf52-4679-ab43-9d222bc09a14"). InnerVolumeSpecName "kube-api-access-l94rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.101831 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "45d322f9-bf52-4679-ab43-9d222bc09a14" (UID: "45d322f9-bf52-4679-ab43-9d222bc09a14"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.125788 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "45d322f9-bf52-4679-ab43-9d222bc09a14" (UID: "45d322f9-bf52-4679-ab43-9d222bc09a14"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.132137 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45d322f9-bf52-4679-ab43-9d222bc09a14" (UID: "45d322f9-bf52-4679-ab43-9d222bc09a14"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.138503 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory" (OuterVolumeSpecName: "inventory") pod "45d322f9-bf52-4679-ab43-9d222bc09a14" (UID: "45d322f9-bf52-4679-ab43-9d222bc09a14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.191753 4922 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.191807 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.191819 4922 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/45d322f9-bf52-4679-ab43-9d222bc09a14-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.191830 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l94rs\" (UniqueName: \"kubernetes.io/projected/45d322f9-bf52-4679-ab43-9d222bc09a14-kube-api-access-l94rs\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.191843 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d322f9-bf52-4679-ab43-9d222bc09a14-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.352337 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" event={"ID":"45d322f9-bf52-4679-ab43-9d222bc09a14","Type":"ContainerDied","Data":"cb909c8a47f940a517a676dc5536d0bf0b70a866ff9c6682d5855478d99f6690"} Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.352400 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb909c8a47f940a517a676dc5536d0bf0b70a866ff9c6682d5855478d99f6690" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.352515 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8jmtt" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.526168 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6"] Feb 18 12:12:15 crc kubenswrapper[4922]: E0218 12:12:15.526689 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d322f9-bf52-4679-ab43-9d222bc09a14" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.526711 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d322f9-bf52-4679-ab43-9d222bc09a14" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.526959 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d322f9-bf52-4679-ab43-9d222bc09a14" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.527846 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.535135 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.535328 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.535457 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.537240 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.537514 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.538636 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.539925 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6"] Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.704467 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkvl\" (UniqueName: \"kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.704544 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.704892 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.705038 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.705113 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.705190 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.806633 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.806987 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.807020 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.807067 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.807150 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htkvl\" (UniqueName: \"kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.807190 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.811629 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.811737 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.812247 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.815930 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.817643 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc 
kubenswrapper[4922]: I0218 12:12:15.826712 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htkvl\" (UniqueName: \"kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:15 crc kubenswrapper[4922]: I0218 12:12:15.861539 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:12:16 crc kubenswrapper[4922]: I0218 12:12:16.443646 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6"] Feb 18 12:12:17 crc kubenswrapper[4922]: I0218 12:12:17.371588 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" event={"ID":"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3","Type":"ContainerStarted","Data":"f675a65ec5122eac9b3db3da6055626f4e90d9a34b75684d8bd611667cdcc4bc"} Feb 18 12:12:17 crc kubenswrapper[4922]: I0218 12:12:17.371902 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" event={"ID":"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3","Type":"ContainerStarted","Data":"a81e1f243cb58e624334569d3d46568771a74e45026b207775af54bf872d2f98"} Feb 18 12:12:17 crc kubenswrapper[4922]: I0218 12:12:17.394767 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" podStartSLOduration=1.857260345 podStartE2EDuration="2.39474805s" podCreationTimestamp="2026-02-18 12:12:15 +0000 UTC" firstStartedPulling="2026-02-18 12:12:16.473786308 +0000 UTC m=+2138.201490388" lastFinishedPulling="2026-02-18 12:12:17.011274013 +0000 UTC m=+2138.738978093" observedRunningTime="2026-02-18 12:12:17.393908789 +0000 UTC m=+2139.121612879" watchObservedRunningTime="2026-02-18 12:12:17.39474805 +0000 UTC m=+2139.122452130" Feb 18 12:12:19 crc kubenswrapper[4922]: I0218 12:12:19.921339 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:19 crc kubenswrapper[4922]: I0218 12:12:19.924678 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:19 crc kubenswrapper[4922]: I0218 12:12:19.940209 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.100222 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.100300 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tct6\" (UniqueName: \"kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.100460 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.201712 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.201902 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.201940 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tct6\" (UniqueName: \"kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.202652 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.202862 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.241559 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6tct6\" (UniqueName: \"kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6\") pod \"certified-operators-5vz25\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.256435 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:20 crc kubenswrapper[4922]: I0218 12:12:20.802021 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:21 crc kubenswrapper[4922]: I0218 12:12:21.415343 4922 generic.go:334] "Generic (PLEG): container finished" podID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerID="97ef2eea12376eeab2e6df9623ea90d67e62db6f4755d6707c06eecdb163bde2" exitCode=0 Feb 18 12:12:21 crc kubenswrapper[4922]: I0218 12:12:21.415425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerDied","Data":"97ef2eea12376eeab2e6df9623ea90d67e62db6f4755d6707c06eecdb163bde2"} Feb 18 12:12:21 crc kubenswrapper[4922]: I0218 12:12:21.415453 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerStarted","Data":"8baac64d2919d5b2b53ee7bfc0c20fa53158f07aa3235e48a21697aa69aeb1df"} Feb 18 12:12:21 crc kubenswrapper[4922]: I0218 12:12:21.593225 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9wgh" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" probeResult="failure" output=< Feb 18 12:12:21 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:12:21 crc kubenswrapper[4922]: > Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.115245 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.121694 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.133573 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.189692 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.189814 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxqg\" (UniqueName: \"kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.189878 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.292846 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.292976 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxqg\" (UniqueName: \"kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.293044 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.293398 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.293800 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.316802 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qcxqg\" (UniqueName: \"kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg\") pod \"redhat-marketplace-c7wgq\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.454191 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:23 crc kubenswrapper[4922]: I0218 12:12:23.954575 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:24 crc kubenswrapper[4922]: I0218 12:12:24.441730 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerStarted","Data":"7e19e5b7a0886f12574215201d36d33ceaa48e9faa615612e6dc043561627932"} Feb 18 12:12:25 crc kubenswrapper[4922]: I0218 12:12:25.452520 4922 generic.go:334] "Generic (PLEG): container finished" podID="12118652-851f-47e2-ac7d-42304ca159f7" containerID="dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf" exitCode=0 Feb 18 12:12:25 crc kubenswrapper[4922]: I0218 12:12:25.452633 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerDied","Data":"dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf"} Feb 18 12:12:25 crc kubenswrapper[4922]: I0218 12:12:25.454448 4922 generic.go:334] "Generic (PLEG): container finished" podID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerID="3e055de6ac902e5ab3c63140a9b1f08e83d9d8d12f85796a582b63697fa0133a" exitCode=0 Feb 18 12:12:25 crc kubenswrapper[4922]: I0218 12:12:25.454478 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerDied","Data":"3e055de6ac902e5ab3c63140a9b1f08e83d9d8d12f85796a582b63697fa0133a"} Feb 18 12:12:27 crc kubenswrapper[4922]: I0218 12:12:27.483465 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerStarted","Data":"f452d7b83cd45bdffb82f76f87e8bc9e821a596574b505e62f7c495e49975eb8"} Feb 18 12:12:27 crc kubenswrapper[4922]: I0218 12:12:27.487726 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerStarted","Data":"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa"} Feb 18 12:12:27 crc kubenswrapper[4922]: I0218 12:12:27.512600 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vz25" podStartSLOduration=2.793957605 podStartE2EDuration="8.512580495s" podCreationTimestamp="2026-02-18 12:12:19 +0000 UTC" firstStartedPulling="2026-02-18 12:12:21.417622338 +0000 UTC m=+2143.145326408" lastFinishedPulling="2026-02-18 12:12:27.136245218 +0000 UTC m=+2148.863949298" observedRunningTime="2026-02-18 12:12:27.501495006 +0000 UTC m=+2149.229199106" watchObservedRunningTime="2026-02-18 12:12:27.512580495 +0000 UTC m=+2149.240284575" Feb 18 12:12:29 crc kubenswrapper[4922]: I0218 12:12:29.513175 4922 generic.go:334] "Generic (PLEG): container finished" 
podID="12118652-851f-47e2-ac7d-42304ca159f7" containerID="eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa" exitCode=0 Feb 18 12:12:29 crc kubenswrapper[4922]: I0218 12:12:29.513265 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerDied","Data":"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa"} Feb 18 12:12:30 crc kubenswrapper[4922]: I0218 12:12:30.257496 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:30 crc kubenswrapper[4922]: I0218 12:12:30.258711 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:30 crc kubenswrapper[4922]: I0218 12:12:30.589882 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:30 crc kubenswrapper[4922]: I0218 12:12:30.648888 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:31 crc kubenswrapper[4922]: I0218 12:12:31.308830 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5vz25" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="registry-server" probeResult="failure" output=< Feb 18 12:12:31 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:12:31 crc kubenswrapper[4922]: > Feb 18 12:12:31 crc kubenswrapper[4922]: I0218 12:12:31.531714 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerStarted","Data":"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607"} Feb 18 12:12:31 crc kubenswrapper[4922]: I0218 12:12:31.557660 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c7wgq" podStartSLOduration=3.6933986450000003 podStartE2EDuration="8.557638481s" podCreationTimestamp="2026-02-18 12:12:23 +0000 UTC" firstStartedPulling="2026-02-18 12:12:25.454647611 +0000 UTC m=+2147.182351681" lastFinishedPulling="2026-02-18 12:12:30.318887447 +0000 UTC m=+2152.046591517" observedRunningTime="2026-02-18 12:12:31.549311572 +0000 UTC m=+2153.277015642" watchObservedRunningTime="2026-02-18 12:12:31.557638481 +0000 UTC m=+2153.285342571" Feb 18 12:12:33 crc kubenswrapper[4922]: I0218 12:12:33.455155 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:33 crc kubenswrapper[4922]: I0218 12:12:33.455200 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:33 crc kubenswrapper[4922]: I0218 12:12:33.506130 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:37 crc kubenswrapper[4922]: I0218 12:12:37.704429 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:12:37 crc kubenswrapper[4922]: I0218 12:12:37.705133 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9wgh" 
podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" containerID="cri-o://ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24" gracePeriod=2 Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.162642 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.187496 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content\") pod \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.187566 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpsh7\" (UniqueName: \"kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7\") pod \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.194624 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7" (OuterVolumeSpecName: "kube-api-access-rpsh7") pod "0c0e4049-cc63-4ef1-aef5-0542ca9b9667" (UID: "0c0e4049-cc63-4ef1-aef5-0542ca9b9667"). InnerVolumeSpecName "kube-api-access-rpsh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.289353 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities\") pod \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\" (UID: \"0c0e4049-cc63-4ef1-aef5-0542ca9b9667\") " Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.289863 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpsh7\" (UniqueName: \"kubernetes.io/projected/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-kube-api-access-rpsh7\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.290180 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities" (OuterVolumeSpecName: "utilities") pod "0c0e4049-cc63-4ef1-aef5-0542ca9b9667" (UID: "0c0e4049-cc63-4ef1-aef5-0542ca9b9667"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.306307 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c0e4049-cc63-4ef1-aef5-0542ca9b9667" (UID: "0c0e4049-cc63-4ef1-aef5-0542ca9b9667"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.391340 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.391427 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c0e4049-cc63-4ef1-aef5-0542ca9b9667-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.591630 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerID="ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24" exitCode=0 Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.591673 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerDied","Data":"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24"} Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.591699 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9wgh" event={"ID":"0c0e4049-cc63-4ef1-aef5-0542ca9b9667","Type":"ContainerDied","Data":"a04d7d069df5d9960d42f6d4bd51a40213af02b5d84049f2834e25d340b39cd9"} Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.591716 4922 scope.go:117] "RemoveContainer" containerID="ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.591852 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9wgh" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.616311 4922 scope.go:117] "RemoveContainer" containerID="6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.626248 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.635897 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9wgh"] Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.662106 4922 scope.go:117] "RemoveContainer" containerID="9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.685753 4922 scope.go:117] "RemoveContainer" containerID="ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24" Feb 18 12:12:38 crc kubenswrapper[4922]: E0218 12:12:38.686251 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24\": container with ID starting with ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24 not found: ID does not exist" containerID="ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.686478 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24"} err="failed to get container status \"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24\": rpc error: code = 
NotFound desc = could not find container \"ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24\": container with ID starting with ce2a92bf867d95dab323be5ac67b666234b7cc50ed9bdbfe009004e18881fc24 not found: ID does not exist" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.686579 4922 scope.go:117] "RemoveContainer" containerID="6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc" Feb 18 12:12:38 crc kubenswrapper[4922]: E0218 12:12:38.687152 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc\": container with ID starting with 6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc not found: ID does not exist" containerID="6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.687236 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc"} err="failed to get container status \"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc\": rpc error: code = NotFound desc = could not find container \"6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc\": container with ID starting with 6a8d4cd1b4b048bf54c76705d96da87ce8de69c632ee41b7145096277e801fbc not found: ID does not exist" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.687296 4922 scope.go:117] "RemoveContainer" containerID="9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2" Feb 18 12:12:38 crc kubenswrapper[4922]: E0218 12:12:38.687734 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2\": container with ID starting with 9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2 not found: ID does not exist" containerID="9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.687763 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2"} err="failed to get container status \"9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2\": rpc error: code = NotFound desc = could not find container \"9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2\": container with ID starting with 9890a2620c97984231eef2e007309f75dd81b3c214c8050c5d3c9d04775450e2 not found: ID does not exist" Feb 18 12:12:38 crc kubenswrapper[4922]: I0218 12:12:38.985021 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" path="/var/lib/kubelet/pods/0c0e4049-cc63-4ef1-aef5-0542ca9b9667/volumes" Feb 18 12:12:39 crc kubenswrapper[4922]: I0218 12:12:39.807855 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:12:39 crc kubenswrapper[4922]: I0218 12:12:39.808225 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:12:40 crc kubenswrapper[4922]: I0218 12:12:40.309409 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:40 crc kubenswrapper[4922]: I0218 12:12:40.364331 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:42 crc kubenswrapper[4922]: I0218 12:12:42.318602 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:42 crc kubenswrapper[4922]: I0218 12:12:42.319805 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vz25" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="registry-server" containerID="cri-o://f452d7b83cd45bdffb82f76f87e8bc9e821a596574b505e62f7c495e49975eb8" gracePeriod=2 Feb 18 12:12:42 crc kubenswrapper[4922]: I0218 12:12:42.632780 4922 generic.go:334] "Generic (PLEG): container finished" podID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerID="f452d7b83cd45bdffb82f76f87e8bc9e821a596574b505e62f7c495e49975eb8" exitCode=0 Feb 18 12:12:42 crc kubenswrapper[4922]: I0218 12:12:42.632874 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerDied","Data":"f452d7b83cd45bdffb82f76f87e8bc9e821a596574b505e62f7c495e49975eb8"} Feb 18 12:12:42 crc kubenswrapper[4922]: I0218 12:12:42.815320 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.006848 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tct6\" (UniqueName: \"kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6\") pod \"c22488e9-a8cd-4400-8b66-15074c7726ac\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.006913 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities\") pod \"c22488e9-a8cd-4400-8b66-15074c7726ac\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.007067 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content\") pod \"c22488e9-a8cd-4400-8b66-15074c7726ac\" (UID: \"c22488e9-a8cd-4400-8b66-15074c7726ac\") " Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.008022 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities" (OuterVolumeSpecName: "utilities") pod "c22488e9-a8cd-4400-8b66-15074c7726ac" (UID: "c22488e9-a8cd-4400-8b66-15074c7726ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.013162 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6" (OuterVolumeSpecName: "kube-api-access-6tct6") pod "c22488e9-a8cd-4400-8b66-15074c7726ac" (UID: "c22488e9-a8cd-4400-8b66-15074c7726ac"). InnerVolumeSpecName "kube-api-access-6tct6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.062345 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c22488e9-a8cd-4400-8b66-15074c7726ac" (UID: "c22488e9-a8cd-4400-8b66-15074c7726ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.113396 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.113424 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c22488e9-a8cd-4400-8b66-15074c7726ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.113436 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tct6\" (UniqueName: \"kubernetes.io/projected/c22488e9-a8cd-4400-8b66-15074c7726ac-kube-api-access-6tct6\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.505341 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.647831 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vz25" event={"ID":"c22488e9-a8cd-4400-8b66-15074c7726ac","Type":"ContainerDied","Data":"8baac64d2919d5b2b53ee7bfc0c20fa53158f07aa3235e48a21697aa69aeb1df"} Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.647896 4922 scope.go:117] "RemoveContainer" containerID="f452d7b83cd45bdffb82f76f87e8bc9e821a596574b505e62f7c495e49975eb8" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.647957 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5vz25" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.668839 4922 scope.go:117] "RemoveContainer" containerID="3e055de6ac902e5ab3c63140a9b1f08e83d9d8d12f85796a582b63697fa0133a" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.694953 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.705593 4922 scope.go:117] "RemoveContainer" containerID="97ef2eea12376eeab2e6df9623ea90d67e62db6f4755d6707c06eecdb163bde2" Feb 18 12:12:43 crc kubenswrapper[4922]: I0218 12:12:43.706445 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vz25"] Feb 18 12:12:44 crc kubenswrapper[4922]: I0218 12:12:44.986244 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" path="/var/lib/kubelet/pods/c22488e9-a8cd-4400-8b66-15074c7726ac/volumes" Feb 18 12:12:45 crc kubenswrapper[4922]: I0218 12:12:45.908785 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:45 crc kubenswrapper[4922]: I0218 12:12:45.909168 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c7wgq" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="registry-server" containerID="cri-o://8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607" gracePeriod=2 Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.421454 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.593074 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content\") pod \"12118652-851f-47e2-ac7d-42304ca159f7\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.593194 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities\") pod \"12118652-851f-47e2-ac7d-42304ca159f7\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.593441 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxqg\" (UniqueName: \"kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg\") pod \"12118652-851f-47e2-ac7d-42304ca159f7\" (UID: \"12118652-851f-47e2-ac7d-42304ca159f7\") " Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.594171 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities" (OuterVolumeSpecName: "utilities") pod "12118652-851f-47e2-ac7d-42304ca159f7" (UID: "12118652-851f-47e2-ac7d-42304ca159f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.600477 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg" (OuterVolumeSpecName: "kube-api-access-qcxqg") pod "12118652-851f-47e2-ac7d-42304ca159f7" (UID: "12118652-851f-47e2-ac7d-42304ca159f7"). InnerVolumeSpecName "kube-api-access-qcxqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.622111 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12118652-851f-47e2-ac7d-42304ca159f7" (UID: "12118652-851f-47e2-ac7d-42304ca159f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.684395 4922 generic.go:334] "Generic (PLEG): container finished" podID="12118652-851f-47e2-ac7d-42304ca159f7" containerID="8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607" exitCode=0 Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.684455 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerDied","Data":"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607"} Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.684495 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7wgq" event={"ID":"12118652-851f-47e2-ac7d-42304ca159f7","Type":"ContainerDied","Data":"7e19e5b7a0886f12574215201d36d33ceaa48e9faa615612e6dc043561627932"} Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.684521 4922 scope.go:117] "RemoveContainer" containerID="8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.684683 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7wgq" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.696626 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.696675 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12118652-851f-47e2-ac7d-42304ca159f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.696688 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcxqg\" (UniqueName: \"kubernetes.io/projected/12118652-851f-47e2-ac7d-42304ca159f7-kube-api-access-qcxqg\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.731418 4922 scope.go:117] "RemoveContainer" containerID="eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.734812 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.745667 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7wgq"] Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.757096 4922 scope.go:117] "RemoveContainer" containerID="dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.805864 4922 scope.go:117] "RemoveContainer" containerID="8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607" Feb 18 12:12:46 crc kubenswrapper[4922]: E0218 12:12:46.806392 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607\": container with ID starting with 8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607 not found: ID does not exist" containerID="8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.806438 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607"} err="failed to get container status \"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607\": rpc error: code = NotFound desc = could not find container \"8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607\": container with ID starting with 8b6f8a1b87bc5c610a532fb17ee32a73f90f8b4873b63a0ecc7e0fb900c07607 not found: ID does not exist" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.806470 4922 scope.go:117] "RemoveContainer" containerID="eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa" Feb 18 12:12:46 crc kubenswrapper[4922]: E0218 12:12:46.806930 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa\": container with ID starting with eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa not found: ID does not exist" containerID="eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.806951 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa"} err="failed to get container status \"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa\": rpc error: code = NotFound desc = could not find container \"eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa\": container with ID starting with eddc46b683c01644b3304efd906e75b5a9fd440e15fcfe930f86c07c2b3ba4fa not found: ID does not exist" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.806966 4922 scope.go:117] "RemoveContainer" containerID="dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf" Feb 18 12:12:46 crc kubenswrapper[4922]: E0218 12:12:46.807329 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf\": container with ID starting with dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf not found: ID does not exist" containerID="dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.807398 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf"} err="failed to get container status \"dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf\": rpc error: code = NotFound desc = could not find container \"dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf\": container with ID starting with dd1dba4d12b91de3b2090b7189eb56e82f21a53041e34adc93da1219ee52a1cf not found: ID does not exist" Feb 18 12:12:46 crc kubenswrapper[4922]: I0218 12:12:46.998041 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12118652-851f-47e2-ac7d-42304ca159f7" path="/var/lib/kubelet/pods/12118652-851f-47e2-ac7d-42304ca159f7/volumes" Feb 18 12:13:04 crc kubenswrapper[4922]: I0218 12:13:04.862020 4922 generic.go:334] "Generic (PLEG): container finished" podID="9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" containerID="f675a65ec5122eac9b3db3da6055626f4e90d9a34b75684d8bd611667cdcc4bc" exitCode=0 Feb 18 12:13:04 crc kubenswrapper[4922]: I0218 12:13:04.862113 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" event={"ID":"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3","Type":"ContainerDied","Data":"f675a65ec5122eac9b3db3da6055626f4e90d9a34b75684d8bd611667cdcc4bc"} Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.282316 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.311541 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.311951 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htkvl\" (UniqueName: \"kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.312018 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.312070 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.312176 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.312223 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam\") pod \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\" (UID: \"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3\") " Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.317408 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.318408 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl" (OuterVolumeSpecName: "kube-api-access-htkvl") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "kube-api-access-htkvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.341339 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.350209 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.358151 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.371257 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory" (OuterVolumeSpecName: "inventory") pod "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" (UID: "9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.417544 4922 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.419058 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htkvl\" (UniqueName: \"kubernetes.io/projected/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-kube-api-access-htkvl\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.419189 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.419261 4922 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.419335 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.419500 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.882013 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" event={"ID":"9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3","Type":"ContainerDied","Data":"a81e1f243cb58e624334569d3d46568771a74e45026b207775af54bf872d2f98"} Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.882316 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81e1f243cb58e624334569d3d46568771a74e45026b207775af54bf872d2f98" Feb 18 12:13:06 crc kubenswrapper[4922]: I0218 12:13:06.882062 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.000812 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf"] Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001277 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001294 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001313 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001323 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001342 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001352 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001392 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001402 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001418 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001425 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001435 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001443 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001458 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001466 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001478 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001488 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 
12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001504 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001513 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="extract-utilities" Feb 18 12:13:07 crc kubenswrapper[4922]: E0218 12:13:07.001537 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001544 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="extract-content" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001786 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22488e9-a8cd-4400-8b66-15074c7726ac" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001820 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="12118652-851f-47e2-ac7d-42304ca159f7" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001832 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.001845 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0e4049-cc63-4ef1-aef5-0542ca9b9667" containerName="registry-server" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.006523 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.008741 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.008913 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.009685 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.009964 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.011004 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.032665 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.033337 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6h8x\" (UniqueName: \"kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: 
\"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.034130 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.034266 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.034313 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.039666 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf"] Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.136275 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6h8x\" (UniqueName: \"kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.136402 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.136493 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.136640 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.136768 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.142952 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.143020 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.143779 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.144055 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.158772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6h8x\" (UniqueName: \"kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.334568 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:13:07 crc kubenswrapper[4922]: I0218 12:13:07.898954 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf"] Feb 18 12:13:08 crc kubenswrapper[4922]: I0218 12:13:08.912819 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" event={"ID":"7d136111-09bf-46fe-aaf8-868a27741f9b","Type":"ContainerStarted","Data":"004c81ea37bbcda1faf27f9cf0255e5a647f141e3451153704351a8c28aa6714"} Feb 18 12:13:08 crc kubenswrapper[4922]: I0218 12:13:08.912899 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" event={"ID":"7d136111-09bf-46fe-aaf8-868a27741f9b","Type":"ContainerStarted","Data":"d334758ef7c460ce4bd03b306cede9ac406be6bf21f14ff74b552e5a162c62ec"} Feb 18 12:13:08 crc kubenswrapper[4922]: I0218 12:13:08.941836 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" podStartSLOduration=2.199500702 podStartE2EDuration="2.941813855s" podCreationTimestamp="2026-02-18 12:13:06 +0000 UTC" firstStartedPulling="2026-02-18 12:13:07.910595766 +0000 UTC m=+2189.638299846" lastFinishedPulling="2026-02-18 12:13:08.652908919 +0000 UTC m=+2190.380612999" observedRunningTime="2026-02-18 12:13:08.928853298 +0000 UTC m=+2190.656557398" watchObservedRunningTime="2026-02-18 12:13:08.941813855 +0000 UTC m=+2190.669517935" Feb 18 12:13:09 crc kubenswrapper[4922]: I0218 12:13:09.806961 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:13:09 crc kubenswrapper[4922]: I0218 12:13:09.807272 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:13:09 crc kubenswrapper[4922]: I0218 12:13:09.807316 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:13:09 crc kubenswrapper[4922]: I0218 12:13:09.808407 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:13:09 crc kubenswrapper[4922]: I0218 12:13:09.809528 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff" gracePeriod=600 Feb 18 12:13:10 crc kubenswrapper[4922]: I0218 12:13:10.938780 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" 
containerID="f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff" exitCode=0 Feb 18 12:13:10 crc kubenswrapper[4922]: I0218 12:13:10.938919 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff"} Feb 18 12:13:10 crc kubenswrapper[4922]: I0218 12:13:10.939799 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab"} Feb 18 12:13:10 crc kubenswrapper[4922]: I0218 12:13:10.939839 4922 scope.go:117] "RemoveContainer" containerID="8b1f76f76a4d90d637663bb4e07ac7219793473bf03151c5be72aa190a2db8df" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.160020 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4"] Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.162401 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.167508 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.168074 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.174625 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4"] Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.251915 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.252640 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.253166 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcltl\" (UniqueName: \"kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.355930 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume\") pod 
\"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.356139 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.356185 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcltl\" (UniqueName: \"kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.357792 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.368256 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.379429 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcltl\" (UniqueName: \"kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl\") pod \"collect-profiles-29523615-grpz4\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.484827 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.961330 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4"] Feb 18 12:15:00 crc kubenswrapper[4922]: I0218 12:15:00.990113 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" event={"ID":"fefb3a87-d203-4ac1-b63d-61c582015132","Type":"ContainerStarted","Data":"28cef7f1097db2361d5e94108dcd6c6a37119408cbb2f95de09d7b9bf0a7695b"} Feb 18 12:15:02 crc kubenswrapper[4922]: I0218 12:15:02.003298 4922 generic.go:334] "Generic (PLEG): container finished" podID="fefb3a87-d203-4ac1-b63d-61c582015132" containerID="5b401b8ee4f7943af0a7b7807634c73c9cc5371f7bb8ea18f378db7de3390a99" exitCode=0 Feb 18 12:15:02 crc kubenswrapper[4922]: I0218 12:15:02.003502 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" event={"ID":"fefb3a87-d203-4ac1-b63d-61c582015132","Type":"ContainerDied","Data":"5b401b8ee4f7943af0a7b7807634c73c9cc5371f7bb8ea18f378db7de3390a99"} Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.358119 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.430749 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume\") pod \"fefb3a87-d203-4ac1-b63d-61c582015132\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.430931 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume\") pod \"fefb3a87-d203-4ac1-b63d-61c582015132\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.430969 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcltl\" (UniqueName: \"kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl\") pod \"fefb3a87-d203-4ac1-b63d-61c582015132\" (UID: \"fefb3a87-d203-4ac1-b63d-61c582015132\") " Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.432592 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume" (OuterVolumeSpecName: "config-volume") pod "fefb3a87-d203-4ac1-b63d-61c582015132" (UID: "fefb3a87-d203-4ac1-b63d-61c582015132"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.439946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fefb3a87-d203-4ac1-b63d-61c582015132" (UID: "fefb3a87-d203-4ac1-b63d-61c582015132"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.440060 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl" (OuterVolumeSpecName: "kube-api-access-mcltl") pod "fefb3a87-d203-4ac1-b63d-61c582015132" (UID: "fefb3a87-d203-4ac1-b63d-61c582015132"). InnerVolumeSpecName "kube-api-access-mcltl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.533238 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fefb3a87-d203-4ac1-b63d-61c582015132-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.533274 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fefb3a87-d203-4ac1-b63d-61c582015132-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:03 crc kubenswrapper[4922]: I0218 12:15:03.533283 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcltl\" (UniqueName: \"kubernetes.io/projected/fefb3a87-d203-4ac1-b63d-61c582015132-kube-api-access-mcltl\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.023460 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" event={"ID":"fefb3a87-d203-4ac1-b63d-61c582015132","Type":"ContainerDied","Data":"28cef7f1097db2361d5e94108dcd6c6a37119408cbb2f95de09d7b9bf0a7695b"} Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.023508 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cef7f1097db2361d5e94108dcd6c6a37119408cbb2f95de09d7b9bf0a7695b" Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.023521 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4" Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.438027 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv"] Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.446931 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523570-k54zv"] Feb 18 12:15:04 crc kubenswrapper[4922]: I0218 12:15:04.988695 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c707c4-5c62-438f-8312-2307d3ef0ba8" path="/var/lib/kubelet/pods/75c707c4-5c62-438f-8312-2307d3ef0ba8/volumes" Feb 18 12:15:12 crc kubenswrapper[4922]: I0218 12:15:12.878602 4922 scope.go:117] "RemoveContainer" containerID="28b5d335d68c0542326e75d8138276312eb3264af281f35e13ca715cee393fd1" Feb 18 12:15:39 crc kubenswrapper[4922]: I0218 12:15:39.807080 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:15:39 crc kubenswrapper[4922]: I0218 12:15:39.808694 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:16:09 crc kubenswrapper[4922]: I0218 12:16:09.808111 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:16:09 crc kubenswrapper[4922]: I0218 12:16:09.808764 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:16:39 crc kubenswrapper[4922]: I0218 12:16:39.808119 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:16:39 crc kubenswrapper[4922]: I0218 12:16:39.808698 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:16:39 crc kubenswrapper[4922]: I0218 12:16:39.808758 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:16:39 crc kubenswrapper[4922]: I0218 12:16:39.809570 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:16:39 crc kubenswrapper[4922]: I0218 12:16:39.809707 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" gracePeriod=600 Feb 18 12:16:39 crc kubenswrapper[4922]: E0218 12:16:39.945404 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:16:40 crc kubenswrapper[4922]: I0218 12:16:40.915810 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" exitCode=0 Feb 18 12:16:40 crc kubenswrapper[4922]: I0218 12:16:40.916119 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab"} Feb 18 12:16:40 crc kubenswrapper[4922]: I0218 12:16:40.916151 4922 scope.go:117] "RemoveContainer" containerID="f96eabfecb9bc217d8636beb3db027fe21b0aa914ece8706e84fc03338c589ff" Feb 18 12:16:40 crc kubenswrapper[4922]: I0218 12:16:40.916822 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:16:40 crc kubenswrapper[4922]: E0218 12:16:40.917075 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:16:53 crc kubenswrapper[4922]: I0218 12:16:53.973695 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:16:53 crc kubenswrapper[4922]: E0218 12:16:53.974768 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:17:07 crc kubenswrapper[4922]: I0218 12:17:07.973979 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:17:07 crc kubenswrapper[4922]: E0218 12:17:07.975721 4922 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:17:21 crc kubenswrapper[4922]: I0218 12:17:21.974671 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:17:21 crc kubenswrapper[4922]: E0218 12:17:21.975543 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.326965 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:25 crc kubenswrapper[4922]: E0218 12:17:25.327836 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefb3a87-d203-4ac1-b63d-61c582015132" containerName="collect-profiles" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.327856 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefb3a87-d203-4ac1-b63d-61c582015132" containerName="collect-profiles" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.329026 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefb3a87-d203-4ac1-b63d-61c582015132" containerName="collect-profiles" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.330700 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.350647 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.377379 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.377802 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr4g\" (UniqueName: \"kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.377883 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.481814 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkr4g\" (UniqueName: \"kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.481897 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.481981 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.482592 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.482875 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.507321 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dkr4g\" (UniqueName: \"kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g\") pod \"community-operators-8vqh2\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:25 crc kubenswrapper[4922]: I0218 12:17:25.654443 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:26 crc kubenswrapper[4922]: I0218 12:17:26.248578 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:26 crc kubenswrapper[4922]: I0218 12:17:26.351482 4922 generic.go:334] "Generic (PLEG): container finished" podID="7d136111-09bf-46fe-aaf8-868a27741f9b" containerID="004c81ea37bbcda1faf27f9cf0255e5a647f141e3451153704351a8c28aa6714" exitCode=0 Feb 18 12:17:26 crc kubenswrapper[4922]: I0218 12:17:26.351564 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" event={"ID":"7d136111-09bf-46fe-aaf8-868a27741f9b","Type":"ContainerDied","Data":"004c81ea37bbcda1faf27f9cf0255e5a647f141e3451153704351a8c28aa6714"} Feb 18 12:17:26 crc kubenswrapper[4922]: I0218 12:17:26.391730 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerStarted","Data":"01c732102ca2fd46bdb631cd0a64430ba5aa0394d12e4a6da819849d4a2e5c11"} Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.404062 4922 generic.go:334] "Generic (PLEG): container finished" podID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerID="0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8" exitCode=0 Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.404188 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerDied","Data":"0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8"} Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.412334 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.828897 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.956089 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory\") pod \"7d136111-09bf-46fe-aaf8-868a27741f9b\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.956710 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0\") pod \"7d136111-09bf-46fe-aaf8-868a27741f9b\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.956810 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam\") pod \"7d136111-09bf-46fe-aaf8-868a27741f9b\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.956911 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle\") pod \"7d136111-09bf-46fe-aaf8-868a27741f9b\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.957114 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6h8x\" (UniqueName: \"kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x\") pod \"7d136111-09bf-46fe-aaf8-868a27741f9b\" (UID: \"7d136111-09bf-46fe-aaf8-868a27741f9b\") " Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.963798 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7d136111-09bf-46fe-aaf8-868a27741f9b" (UID: "7d136111-09bf-46fe-aaf8-868a27741f9b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.964478 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x" (OuterVolumeSpecName: "kube-api-access-c6h8x") pod "7d136111-09bf-46fe-aaf8-868a27741f9b" (UID: "7d136111-09bf-46fe-aaf8-868a27741f9b"). InnerVolumeSpecName "kube-api-access-c6h8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.992853 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory" (OuterVolumeSpecName: "inventory") pod "7d136111-09bf-46fe-aaf8-868a27741f9b" (UID: "7d136111-09bf-46fe-aaf8-868a27741f9b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:27 crc kubenswrapper[4922]: I0218 12:17:27.994922 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7d136111-09bf-46fe-aaf8-868a27741f9b" (UID: "7d136111-09bf-46fe-aaf8-868a27741f9b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.006882 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7d136111-09bf-46fe-aaf8-868a27741f9b" (UID: "7d136111-09bf-46fe-aaf8-868a27741f9b"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.061294 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.061353 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.061395 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6h8x\" (UniqueName: \"kubernetes.io/projected/7d136111-09bf-46fe-aaf8-868a27741f9b-kube-api-access-c6h8x\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.061413 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.061426 4922 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7d136111-09bf-46fe-aaf8-868a27741f9b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.422859 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" event={"ID":"7d136111-09bf-46fe-aaf8-868a27741f9b","Type":"ContainerDied","Data":"d334758ef7c460ce4bd03b306cede9ac406be6bf21f14ff74b552e5a162c62ec"} Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.422911 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d334758ef7c460ce4bd03b306cede9ac406be6bf21f14ff74b552e5a162c62ec" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.422948 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.506774 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7"] Feb 18 12:17:28 crc kubenswrapper[4922]: E0218 12:17:28.507337 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d136111-09bf-46fe-aaf8-868a27741f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.507410 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d136111-09bf-46fe-aaf8-868a27741f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.507698 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d136111-09bf-46fe-aaf8-868a27741f9b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.508622 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.517013 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.517435 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.517561 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.517699 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.517830 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.518012 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.518137 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.528279 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7"] Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.573798 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.573895 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.573929 4922 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574313 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574509 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574719 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574853 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574889 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.574926 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.575038 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.676828 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.676930 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.676973 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677019 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677078 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677105 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677144 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677191 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677242 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677287 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.677323 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.678165 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.682869 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.682880 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.683123 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.684529 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.684674 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.684906 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.685493 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.686591 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.688101 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.702291 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hswp7\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:28 crc kubenswrapper[4922]: I0218 12:17:28.831519 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:17:29 crc kubenswrapper[4922]: I0218 12:17:29.375512 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7"] Feb 18 12:17:29 crc kubenswrapper[4922]: I0218 12:17:29.432569 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" event={"ID":"6e9e482a-c85e-473f-b848-e6fb6ba6afcd","Type":"ContainerStarted","Data":"b802ddd617fe0e8d5f19f836561f6f6d8d1e7e23231f318980afcb28d7a13059"} Feb 18 12:17:31 crc kubenswrapper[4922]: I0218 12:17:31.507763 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" event={"ID":"6e9e482a-c85e-473f-b848-e6fb6ba6afcd","Type":"ContainerStarted","Data":"2adb9429bcb3ef41bd775beaaf3c6d68c44fa008e3e767ad5a7534d6bcd41e78"} Feb 18 12:17:32 crc kubenswrapper[4922]: I0218 12:17:32.519604 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerStarted","Data":"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee"} Feb 18 12:17:32 crc kubenswrapper[4922]: I0218 12:17:32.546872 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" podStartSLOduration=3.734466121 podStartE2EDuration="4.546843325s" podCreationTimestamp="2026-02-18 12:17:28 +0000 UTC" firstStartedPulling="2026-02-18 12:17:29.387037443 +0000 UTC m=+2451.114741523" lastFinishedPulling="2026-02-18 12:17:30.199414647 +0000 UTC m=+2451.927118727" observedRunningTime="2026-02-18 12:17:31.536495091 +0000 UTC m=+2453.264199171" watchObservedRunningTime="2026-02-18 12:17:32.546843325 +0000 UTC m=+2454.274547435" Feb 18 12:17:34 crc kubenswrapper[4922]: I0218 12:17:34.543714 4922 generic.go:334] "Generic (PLEG): container finished" podID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerID="a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee" exitCode=0 Feb 18 12:17:34 crc kubenswrapper[4922]: I0218 12:17:34.543767 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerDied","Data":"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee"} Feb 18 12:17:35 crc kubenswrapper[4922]: I0218 12:17:35.555776 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerStarted","Data":"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f"} Feb 18 12:17:35 crc kubenswrapper[4922]: I0218 12:17:35.575243 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8vqh2" podStartSLOduration=3.016852244 podStartE2EDuration="10.575222238s" podCreationTimestamp="2026-02-18 12:17:25 +0000 UTC" firstStartedPulling="2026-02-18 12:17:27.40963973 +0000 UTC m=+2449.137343810" lastFinishedPulling="2026-02-18 12:17:34.968009714 +0000 UTC m=+2456.695713804" observedRunningTime="2026-02-18 12:17:35.57369669 +0000 UTC m=+2457.301400790" watchObservedRunningTime="2026-02-18 12:17:35.575222238 +0000 UTC m=+2457.302926318" Feb 18 12:17:35 crc kubenswrapper[4922]: I0218 12:17:35.655257 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:35 crc kubenswrapper[4922]: I0218 12:17:35.655342 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:35 crc kubenswrapper[4922]: I0218 12:17:35.972822 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:17:35 crc kubenswrapper[4922]: E0218 12:17:35.973501 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:17:36 crc kubenswrapper[4922]: I0218 12:17:36.709799 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8vqh2" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="registry-server" probeResult="failure" output=< Feb 18 12:17:36 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:17:36 crc kubenswrapper[4922]: > Feb 18 12:17:45 crc kubenswrapper[4922]: I0218 12:17:45.708040 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:45 crc kubenswrapper[4922]: I0218 12:17:45.762466 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:45 crc kubenswrapper[4922]: I0218 12:17:45.949311 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:47 crc kubenswrapper[4922]: I0218 12:17:47.657068 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8vqh2" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="registry-server" containerID="cri-o://dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f" gracePeriod=2 Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.170514 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.335994 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities\") pod \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.336209 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkr4g\" (UniqueName: \"kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g\") pod \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.336280 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content\") pod \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\" (UID: \"526949a6-53f6-4b36-b4ec-48a4a8b612e9\") " Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.336787 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities" (OuterVolumeSpecName: "utilities") pod "526949a6-53f6-4b36-b4ec-48a4a8b612e9" (UID: "526949a6-53f6-4b36-b4ec-48a4a8b612e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.337231 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.343691 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g" (OuterVolumeSpecName: "kube-api-access-dkr4g") pod "526949a6-53f6-4b36-b4ec-48a4a8b612e9" (UID: "526949a6-53f6-4b36-b4ec-48a4a8b612e9"). InnerVolumeSpecName "kube-api-access-dkr4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.392900 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "526949a6-53f6-4b36-b4ec-48a4a8b612e9" (UID: "526949a6-53f6-4b36-b4ec-48a4a8b612e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.438621 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkr4g\" (UniqueName: \"kubernetes.io/projected/526949a6-53f6-4b36-b4ec-48a4a8b612e9-kube-api-access-dkr4g\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.438841 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/526949a6-53f6-4b36-b4ec-48a4a8b612e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.670174 4922 generic.go:334] "Generic (PLEG): container finished" podID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerID="dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f" exitCode=0 Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.670240 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerDied","Data":"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f"} Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.670271 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vqh2" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.670623 4922 scope.go:117] "RemoveContainer" containerID="dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.670584 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vqh2" event={"ID":"526949a6-53f6-4b36-b4ec-48a4a8b612e9","Type":"ContainerDied","Data":"01c732102ca2fd46bdb631cd0a64430ba5aa0394d12e4a6da819849d4a2e5c11"} Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.699923 4922 scope.go:117] "RemoveContainer" containerID="a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.719394 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.728162 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8vqh2"] Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.741222 4922 scope.go:117] "RemoveContainer" containerID="0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.805560 4922 scope.go:117] "RemoveContainer" containerID="dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f" Feb 18 12:17:48 crc kubenswrapper[4922]: E0218 12:17:48.809475 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f\": container with ID starting with dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f not found: ID does not exist" containerID="dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.809513 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f"} err="failed to get container status 
\"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f\": rpc error: code = NotFound desc = could not find container \"dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f\": container with ID starting with dc982e8e0ad2018b62dcf843753a3126bfdb6d8b3e07965135a9f90ed93bc56f not found: ID does not exist" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.809536 4922 scope.go:117] "RemoveContainer" containerID="a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee" Feb 18 12:17:48 crc kubenswrapper[4922]: E0218 12:17:48.817496 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee\": container with ID starting with a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee not found: ID does not exist" containerID="a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.817531 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee"} err="failed to get container status \"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee\": rpc error: code = NotFound desc = could not find container \"a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee\": container with ID starting with a92dee8aeb759eec962acd64c4a337ee288c07c686cfa1ad8f7c4d73cb7393ee not found: ID does not exist" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.817557 4922 scope.go:117] "RemoveContainer" containerID="0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8" Feb 18 12:17:48 crc kubenswrapper[4922]: E0218 12:17:48.824497 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8\": container with ID starting with 0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8 not found: ID does not exist" containerID="0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.824544 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8"} err="failed to get container status \"0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8\": rpc error: code = NotFound desc = could not find container \"0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8\": container with ID starting with 0a1c454e5a730d36b5ea73c9d1cecd08af9d5ee5f5c304af6fb52d0ccce6cbb8 not found: ID does not exist" Feb 18 12:17:48 crc kubenswrapper[4922]: I0218 12:17:48.985091 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" path="/var/lib/kubelet/pods/526949a6-53f6-4b36-b4ec-48a4a8b612e9/volumes" Feb 18 12:17:49 crc kubenswrapper[4922]: I0218 12:17:49.973148 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:17:49 crc kubenswrapper[4922]: E0218 12:17:49.973531 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:18:03 crc kubenswrapper[4922]: I0218 12:18:03.973536 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:18:03 crc kubenswrapper[4922]: E0218 12:18:03.974163 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:18:15 crc kubenswrapper[4922]: I0218 12:18:15.973344 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:18:15 crc kubenswrapper[4922]: E0218 12:18:15.974259 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:18:29 crc kubenswrapper[4922]: I0218 12:18:29.004489 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:18:29 crc kubenswrapper[4922]: E0218 12:18:29.006286 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:18:42 crc kubenswrapper[4922]: I0218 12:18:42.975022 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:18:42 crc kubenswrapper[4922]: E0218 12:18:42.975931 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:18:53 crc kubenswrapper[4922]: I0218 12:18:53.974297 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:18:53 crc kubenswrapper[4922]: E0218 12:18:53.975524 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" 
podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:05 crc kubenswrapper[4922]: I0218 12:19:05.973540 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:19:05 crc kubenswrapper[4922]: E0218 12:19:05.974353 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:17 crc kubenswrapper[4922]: I0218 12:19:17.973169 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:19:17 crc kubenswrapper[4922]: E0218 12:19:17.973953 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:31 crc kubenswrapper[4922]: I0218 12:19:31.973978 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:19:31 crc kubenswrapper[4922]: E0218 12:19:31.974864 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:42 crc kubenswrapper[4922]: I0218 12:19:42.973298 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:19:42 crc kubenswrapper[4922]: E0218 12:19:42.974123 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:51 crc kubenswrapper[4922]: I0218 12:19:51.783887 4922 generic.go:334] "Generic (PLEG): container finished" podID="6e9e482a-c85e-473f-b848-e6fb6ba6afcd" containerID="2adb9429bcb3ef41bd775beaaf3c6d68c44fa008e3e767ad5a7534d6bcd41e78" exitCode=0 Feb 18 12:19:51 crc kubenswrapper[4922]: I0218 12:19:51.783976 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" event={"ID":"6e9e482a-c85e-473f-b848-e6fb6ba6afcd","Type":"ContainerDied","Data":"2adb9429bcb3ef41bd775beaaf3c6d68c44fa008e3e767ad5a7534d6bcd41e78"} Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.234916 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354662 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354718 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354755 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354799 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354870 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.354941 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.355007 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.355034 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.355099 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.355128 4922 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.355162 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory\") pod \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\" (UID: \"6e9e482a-c85e-473f-b848-e6fb6ba6afcd\") " Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.362688 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.366589 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl" (OuterVolumeSpecName: "kube-api-access-9rprl") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "kube-api-access-9rprl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.392220 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.396261 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.397776 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.408703 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.411087 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.413725 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.414445 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory" (OuterVolumeSpecName: "inventory") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.417180 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.420575 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6e9e482a-c85e-473f-b848-e6fb6ba6afcd" (UID: "6e9e482a-c85e-473f-b848-e6fb6ba6afcd"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457299 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457338 4922 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457354 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457386 4922 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457398 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457410 4922 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457421 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457432 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457441 4922 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457452 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rprl\" (UniqueName: \"kubernetes.io/projected/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-kube-api-access-9rprl\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.457462 4922 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6e9e482a-c85e-473f-b848-e6fb6ba6afcd-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.810788 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" event={"ID":"6e9e482a-c85e-473f-b848-e6fb6ba6afcd","Type":"ContainerDied","Data":"b802ddd617fe0e8d5f19f836561f6f6d8d1e7e23231f318980afcb28d7a13059"} Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 
12:19:53.810845 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b802ddd617fe0e8d5f19f836561f6f6d8d1e7e23231f318980afcb28d7a13059" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.810903 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hswp7" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.929144 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs"] Feb 18 12:19:53 crc kubenswrapper[4922]: E0218 12:19:53.929712 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="extract-utilities" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.929737 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="extract-utilities" Feb 18 12:19:53 crc kubenswrapper[4922]: E0218 12:19:53.929780 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9e482a-c85e-473f-b848-e6fb6ba6afcd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.929789 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9e482a-c85e-473f-b848-e6fb6ba6afcd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:53 crc kubenswrapper[4922]: E0218 12:19:53.929808 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="extract-content" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.929818 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="extract-content" Feb 18 12:19:53 crc kubenswrapper[4922]: E0218 12:19:53.929829 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="registry-server" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.929837 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="registry-server" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.930050 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="526949a6-53f6-4b36-b4ec-48a4a8b612e9" containerName="registry-server" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.930077 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9e482a-c85e-473f-b848-e6fb6ba6afcd" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.931002 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.934153 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.934349 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.934536 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.934647 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8fsfv" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.934765 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965423 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965540 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965564 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxrx\" (UniqueName: \"kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965586 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965642 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965669 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.965722 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.966946 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs"] Feb 18 12:19:53 crc kubenswrapper[4922]: I0218 12:19:53.974229 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:19:53 crc kubenswrapper[4922]: E0218 12:19:53.974474 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.067609 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.067801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.067946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.067975 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxrx\" (UniqueName: \"kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.068007 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.068128 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.068166 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.077702 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.084008 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.085064 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.088920 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.095032 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.103309 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.106307 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxrx\" (UniqueName: \"kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-57sjs\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.246786 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.738284 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs"] Feb 18 12:19:54 crc kubenswrapper[4922]: I0218 12:19:54.823317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" event={"ID":"0c5871a2-bb79-4b43-a830-7714fa7d8241","Type":"ContainerStarted","Data":"0f088979f73251d390e5b8a547c861aea9ecc20d03724e3ee0291b2f23342cde"} Feb 18 12:19:55 crc kubenswrapper[4922]: I0218 12:19:55.834299 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" event={"ID":"0c5871a2-bb79-4b43-a830-7714fa7d8241","Type":"ContainerStarted","Data":"2b647db5b11649df8cf1b62b53fef9fdce9f9caa2ea6827a8ff6a1eafa1b40eb"} Feb 18 12:19:55 crc kubenswrapper[4922]: I0218 12:19:55.864195 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" podStartSLOduration=2.0672184319999998 podStartE2EDuration="2.864156741s" podCreationTimestamp="2026-02-18 12:19:53 +0000 UTC" firstStartedPulling="2026-02-18 12:19:54.743930003 +0000 UTC m=+2596.471634073" lastFinishedPulling="2026-02-18 12:19:55.540868312 +0000 UTC m=+2597.268572382" observedRunningTime="2026-02-18 12:19:55.851043859 +0000 UTC m=+2597.578747949" watchObservedRunningTime="2026-02-18 12:19:55.864156741 +0000 UTC m=+2597.591860821" Feb 18 12:20:07 crc kubenswrapper[4922]: I0218 12:20:07.974156 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:20:07 crc kubenswrapper[4922]: E0218 12:20:07.975604 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:20:19 crc kubenswrapper[4922]: I0218 12:20:19.973124 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:20:19 crc kubenswrapper[4922]: E0218 
12:20:19.973824 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:20:34 crc kubenswrapper[4922]: I0218 12:20:34.973585 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:20:34 crc kubenswrapper[4922]: E0218 12:20:34.974278 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:20:45 crc kubenswrapper[4922]: I0218 12:20:45.973634 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:20:45 crc kubenswrapper[4922]: E0218 12:20:45.974463 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:20:58 crc kubenswrapper[4922]: I0218 12:20:58.981413 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:20:58 crc kubenswrapper[4922]: E0218 12:20:58.982190 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:21:10 crc kubenswrapper[4922]: I0218 12:21:10.973727 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:21:10 crc kubenswrapper[4922]: E0218 12:21:10.974677 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:21:23 crc kubenswrapper[4922]: I0218 12:21:23.973794 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:21:23 crc kubenswrapper[4922]: E0218 12:21:23.974626 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:21:35 crc kubenswrapper[4922]: I0218 12:21:35.975120 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:21:35 crc kubenswrapper[4922]: E0218 12:21:35.976098 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:21:43 crc kubenswrapper[4922]: I0218 12:21:43.755543 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c5871a2-bb79-4b43-a830-7714fa7d8241" containerID="2b647db5b11649df8cf1b62b53fef9fdce9f9caa2ea6827a8ff6a1eafa1b40eb" exitCode=0 Feb 18 12:21:43 crc kubenswrapper[4922]: I0218 12:21:43.755630 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" event={"ID":"0c5871a2-bb79-4b43-a830-7714fa7d8241","Type":"ContainerDied","Data":"2b647db5b11649df8cf1b62b53fef9fdce9f9caa2ea6827a8ff6a1eafa1b40eb"} Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.241164 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.287494 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.287842 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxrx\" (UniqueName: \"kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.287873 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.287895 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.287913 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: 
\"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.288025 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.288057 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1\") pod \"0c5871a2-bb79-4b43-a830-7714fa7d8241\" (UID: \"0c5871a2-bb79-4b43-a830-7714fa7d8241\") " Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.293576 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.311308 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx" (OuterVolumeSpecName: "kube-api-access-2qxrx") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "kube-api-access-2qxrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.316020 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.317182 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory" (OuterVolumeSpecName: "inventory") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.318809 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.321382 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). 
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.323870 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c5871a2-bb79-4b43-a830-7714fa7d8241" (UID: "0c5871a2-bb79-4b43-a830-7714fa7d8241"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393056 4922 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393197 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxrx\" (UniqueName: \"kubernetes.io/projected/0c5871a2-bb79-4b43-a830-7714fa7d8241-kube-api-access-2qxrx\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393285 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393354 4922 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393445 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393507 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.393570 4922 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0c5871a2-bb79-4b43-a830-7714fa7d8241-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.778187 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" event={"ID":"0c5871a2-bb79-4b43-a830-7714fa7d8241","Type":"ContainerDied","Data":"0f088979f73251d390e5b8a547c861aea9ecc20d03724e3ee0291b2f23342cde"} Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.778227 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f088979f73251d390e5b8a547c861aea9ecc20d03724e3ee0291b2f23342cde" Feb 18 12:21:45 crc kubenswrapper[4922]: I0218 12:21:45.778282 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-57sjs" Feb 18 12:21:46 crc kubenswrapper[4922]: I0218 12:21:46.975596 4922 scope.go:117] "RemoveContainer" containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:21:47 crc kubenswrapper[4922]: I0218 12:21:47.810471 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7"} Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.214771 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:22 crc kubenswrapper[4922]: E0218 12:22:22.217626 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5871a2-bb79-4b43-a830-7714fa7d8241" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.217651 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5871a2-bb79-4b43-a830-7714fa7d8241" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.217834 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5871a2-bb79-4b43-a830-7714fa7d8241" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.219283 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.235202 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.241196 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.241298 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.241437 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw68x\" (UniqueName: \"kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.342766 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.342837 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.342870 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw68x\" (UniqueName: \"kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.343298 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.343437 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.365003 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw68x\" (UniqueName: \"kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x\") pod \"redhat-operators-x6t7b\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:22 crc kubenswrapper[4922]: I0218 12:22:22.583684 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:23 crc kubenswrapper[4922]: I0218 12:22:23.083920 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:23 crc kubenswrapper[4922]: I0218 12:22:23.105172 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerStarted","Data":"01304453471ec40644ea6059952df8e463fbece81357fde7a770b3318d0244bd"} Feb 18 12:22:24 crc kubenswrapper[4922]: I0218 12:22:24.115070 4922 generic.go:334] "Generic (PLEG): container finished" podID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerID="13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296" exitCode=0 Feb 18 12:22:24 crc kubenswrapper[4922]: I0218 12:22:24.115124 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerDied","Data":"13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296"} Feb 18 12:22:25 crc kubenswrapper[4922]: I0218 12:22:25.125895 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerStarted","Data":"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227"} Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.594036 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.596272 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.605585 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.661125 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz4fm\" (UniqueName: \"kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.661171 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.661281 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.763417 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz4fm\" (UniqueName: \"kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm\") pod \"redhat-marketplace-gmwj8\" (UID: 
\"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.763477 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.763579 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.764170 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.764203 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.783260 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz4fm\" (UniqueName: \"kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm\") pod \"redhat-marketplace-gmwj8\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:27 crc kubenswrapper[4922]: I0218 12:22:27.915202 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:28 crc kubenswrapper[4922]: I0218 12:22:28.440619 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:29 crc kubenswrapper[4922]: E0218 12:22:29.051568 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6217a40d_f959_4afa_b48e_b25c1c1693c1.slice/crio-conmon-cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227.scope\": RecentStats: unable to find data in memory cache]" Feb 18 12:22:29 crc kubenswrapper[4922]: I0218 12:22:29.166463 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerStarted","Data":"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45"} Feb 18 12:22:29 crc kubenswrapper[4922]: I0218 12:22:29.166517 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerStarted","Data":"c172eba0a6e88b586259ce4d77fcba9de3974f0deeaae0baf659acb7a41aaa60"} Feb 18 12:22:29 crc kubenswrapper[4922]: I0218 12:22:29.169139 4922 generic.go:334] "Generic (PLEG): container finished" podID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerID="cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227" exitCode=0 Feb 18 12:22:29 crc kubenswrapper[4922]: I0218 12:22:29.169195 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerDied","Data":"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227"} Feb 18 12:22:29 crc kubenswrapper[4922]: I0218 12:22:29.184404 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:22:30 crc kubenswrapper[4922]: I0218 12:22:30.181443 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerID="5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45" exitCode=0 Feb 18 12:22:30 crc kubenswrapper[4922]: I0218 12:22:30.181670 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerDied","Data":"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45"} Feb 18 12:22:31 crc kubenswrapper[4922]: I0218 12:22:31.191972 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerStarted","Data":"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723"} Feb 18 12:22:31 crc kubenswrapper[4922]: I0218 12:22:31.196603 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerStarted","Data":"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae"} Feb 18 12:22:31 crc kubenswrapper[4922]: I0218 12:22:31.237585 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6t7b" podStartSLOduration=3.108707252 podStartE2EDuration="9.23756636s" 
podCreationTimestamp="2026-02-18 12:22:22 +0000 UTC" firstStartedPulling="2026-02-18 12:22:24.1189741 +0000 UTC m=+2745.846678180" lastFinishedPulling="2026-02-18 12:22:30.247833208 +0000 UTC m=+2751.975537288" observedRunningTime="2026-02-18 12:22:31.22689851 +0000 UTC m=+2752.954602600" watchObservedRunningTime="2026-02-18 12:22:31.23756636 +0000 UTC m=+2752.965270440" Feb 18 12:22:32 crc kubenswrapper[4922]: I0218 12:22:32.208421 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerID="e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723" exitCode=0 Feb 18 12:22:32 crc kubenswrapper[4922]: I0218 12:22:32.208513 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerDied","Data":"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723"} Feb 18 12:22:32 crc kubenswrapper[4922]: I0218 12:22:32.584353 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:32 crc kubenswrapper[4922]: I0218 12:22:32.584667 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:33 crc kubenswrapper[4922]: I0218 12:22:33.221223 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerStarted","Data":"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a"} Feb 18 12:22:33 crc kubenswrapper[4922]: I0218 12:22:33.239986 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmwj8" podStartSLOduration=3.736879006 podStartE2EDuration="6.239956925s" podCreationTimestamp="2026-02-18 12:22:27 +0000 UTC" firstStartedPulling="2026-02-18 12:22:30.183719923 +0000 UTC m=+2751.911424003" lastFinishedPulling="2026-02-18 12:22:32.686797842 +0000 UTC m=+2754.414501922" observedRunningTime="2026-02-18 12:22:33.236812115 +0000 UTC m=+2754.964516195" watchObservedRunningTime="2026-02-18 12:22:33.239956925 +0000 UTC m=+2754.967660995" Feb 18 12:22:33 crc kubenswrapper[4922]: I0218 12:22:33.630434 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x6t7b" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="registry-server" probeResult="failure" output=< Feb 18 12:22:33 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:22:33 crc kubenswrapper[4922]: > Feb 18 12:22:37 crc kubenswrapper[4922]: I0218 12:22:37.915796 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:37 crc kubenswrapper[4922]: I0218 12:22:37.917631 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:37 crc kubenswrapper[4922]: I0218 12:22:37.971190 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:38 crc kubenswrapper[4922]: I0218 12:22:38.318649 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:38 crc kubenswrapper[4922]: I0218 12:22:38.375118 4922 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.124440 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.124792 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="prometheus" containerID="cri-o://6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b" gracePeriod=600 Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.124922 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="config-reloader" containerID="cri-o://491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c" gracePeriod=600 Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.124906 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="thanos-sidecar" containerID="cri-o://a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b" gracePeriod=600 Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.291562 4922 generic.go:334] "Generic (PLEG): container finished" podID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerID="a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b" exitCode=0 Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.291851 4922 generic.go:334] "Generic (PLEG): container finished" podID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerID="6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b" exitCode=0 Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.292671 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerDied","Data":"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b"} Feb 18 12:22:39 crc kubenswrapper[4922]: I0218 12:22:39.292711 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerDied","Data":"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b"} Feb 18 12:22:39 crc kubenswrapper[4922]: E0218 12:22:39.328344 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791eb2d2_e8b0_401d_8b8b_b79d675c1ca4.slice/crio-6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791eb2d2_e8b0_401d_8b8b_b79d675c1ca4.slice/crio-conmon-a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791eb2d2_e8b0_401d_8b8b_b79d675c1ca4.slice/crio-a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b.scope\": RecentStats: unable to find data in memory cache]" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.140212 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303434 4922 generic.go:334] "Generic (PLEG): container finished" podID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerID="491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c" exitCode=0 Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303533 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303523 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerDied","Data":"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c"} Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303615 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4","Type":"ContainerDied","Data":"eb650b51fbe53331680115b5b916f11fa8f568a0682dba6d2801da306b88e53d"} Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303655 4922 scope.go:117] "RemoveContainer" containerID="a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.303654 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gmwj8" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="registry-server" containerID="cri-o://824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a" gracePeriod=2 Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.329179 4922 scope.go:117] "RemoveContainer" containerID="491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335066 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335108 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335130 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335210 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5t7\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335246 4922 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335273 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335322 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335343 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335403 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335426 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335447 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335910 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.335961 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config\") pod \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\" (UID: \"791eb2d2-e8b0-401d-8b8b-b79d675c1ca4\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.337020 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1" 
(OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.337116 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.337484 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.341640 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.343156 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config" (OuterVolumeSpecName: "config") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.343327 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.343458 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.344084 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.345973 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out" (OuterVolumeSpecName: "config-out") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.347702 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7" (OuterVolumeSpecName: "kube-api-access-vp5t7") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "kube-api-access-vp5t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.349673 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.359331 4922 scope.go:117] "RemoveContainer" containerID="6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.370810 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "pvc-15ee050c-37dd-43df-8cbe-1200f09a5545". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438295 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") on node \"crc\" " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438337 4922 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438353 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438462 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438489 4922 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438505 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5t7\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-kube-api-access-vp5t7\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438518 4922 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438536 4922 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438551 4922 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438565 4922 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438579 4922 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.438762 4922 reconciler_common.go:293] "Volume detached for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.462856 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config" (OuterVolumeSpecName: "web-config") pod "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" (UID: "791eb2d2-e8b0-401d-8b8b-b79d675c1ca4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.511154 4922 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.512158 4922 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15ee050c-37dd-43df-8cbe-1200f09a5545" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545") on node "crc" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.540514 4922 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.540549 4922 reconciler_common.go:293] "Volume detached for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.613652 4922 scope.go:117] "RemoveContainer" containerID="66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.663579 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.668829 4922 scope.go:117] "RemoveContainer" containerID="a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.679201 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.692798 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b\": container with ID starting with a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b not found: ID does not exist" containerID="a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.692858 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b"} err="failed to get container status \"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b\": rpc error: code = NotFound desc = could not find container \"a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b\": container with ID starting with a913d6036fa961f4da9057dcc5a6d40a6e4b3e35d77e43f7033ef4f740b7546b not found: ID does not exist" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.692890 4922 scope.go:117] "RemoveContainer" containerID="491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c" Feb 18 
12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.693403 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c\": container with ID starting with 491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c not found: ID does not exist" containerID="491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.693448 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c"} err="failed to get container status \"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c\": rpc error: code = NotFound desc = could not find container \"491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c\": container with ID starting with 491ccef72e7d04d2ba0ad4f552d88e3b6a7d14b1901010114c09e2c9f1ceb17c not found: ID does not exist" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.693476 4922 scope.go:117] "RemoveContainer" containerID="6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b" Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.693810 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b\": container with ID starting with 6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b not found: ID does not exist" containerID="6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.693847 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b"} err="failed to get container status \"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b\": rpc error: code = NotFound desc = could not find container \"6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b\": container with ID starting with 6a87e12a9d544ccc469c240f591cfdfbc76c565f80e42da4d9b856e9a979562b not found: ID does not exist" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.693866 4922 scope.go:117] "RemoveContainer" containerID="66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8" Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.697672 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8\": container with ID starting with 66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8 not found: ID does not exist" containerID="66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.697773 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8"} err="failed to get container status \"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8\": rpc error: code = NotFound desc = could not find container \"66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8\": container with ID starting with 66d83f5860f3877f372e8e31496543d7009d3da9bb6e8967b9cf02ffa25c24f8 not found: ID does not exist" Feb 18 12:22:40 crc 
kubenswrapper[4922]: I0218 12:22:40.708716 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.709193 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="prometheus" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709218 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="prometheus" Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.709242 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="config-reloader" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709251 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="config-reloader" Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.709270 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="init-config-reloader" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709280 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="init-config-reloader" Feb 18 12:22:40 crc kubenswrapper[4922]: E0218 12:22:40.709297 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="thanos-sidecar" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709304 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="thanos-sidecar" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709658 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="config-reloader" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709688 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="thanos-sidecar" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.709700 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" containerName="prometheus" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.711873 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.714055 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xmthr" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.715301 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.715750 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.716053 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.716294 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.716473 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.716737 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.726727 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.746598 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.777745 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.857410 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.857811 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.857927 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.857968 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/13358646-85fa-4761-b4e8-ce5baf8851da-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858003 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858042 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858126 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858168 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc 
kubenswrapper[4922]: I0218 12:22:40.858248 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858323 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858387 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858540 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk7d\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-kube-api-access-sgk7d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.858573 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960028 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities\") pod \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960230 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz4fm\" (UniqueName: \"kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm\") pod \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960399 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content\") pod \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\" (UID: \"d2f2ec0a-16c2-4808-8871-a8e56bd045a9\") " Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960656 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgk7d\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-kube-api-access-sgk7d\") pod 
\"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960689 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960725 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960742 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960834 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities" (OuterVolumeSpecName: "utilities") pod "d2f2ec0a-16c2-4808-8871-a8e56bd045a9" (UID: "d2f2ec0a-16c2-4808-8871-a8e56bd045a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.960929 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.961469 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/13358646-85fa-4761-b4e8-ce5baf8851da-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.961530 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.961565 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.961707 4922 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962205 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962806 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962880 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962897 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.962946 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.963528 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.963647 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/13358646-85fa-4761-b4e8-ce5baf8851da-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.965824 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.970296 4922 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.970346 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d461f0c4a551673a0d7d7003637451f1312f1b9722a2159a051859daee296e97/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.970454 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/13358646-85fa-4761-b4e8-ce5baf8851da-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.970573 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.971160 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm" (OuterVolumeSpecName: "kube-api-access-nz4fm") pod "d2f2ec0a-16c2-4808-8871-a8e56bd045a9" (UID: "d2f2ec0a-16c2-4808-8871-a8e56bd045a9"). InnerVolumeSpecName "kube-api-access-nz4fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.974585 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.976861 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.978623 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.980968 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.981915 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/13358646-85fa-4761-b4e8-ce5baf8851da-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.982562 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgk7d\" (UniqueName: \"kubernetes.io/projected/13358646-85fa-4761-b4e8-ce5baf8851da-kube-api-access-sgk7d\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.984086 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2f2ec0a-16c2-4808-8871-a8e56bd045a9" (UID: "d2f2ec0a-16c2-4808-8871-a8e56bd045a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:22:40 crc kubenswrapper[4922]: I0218 12:22:40.986938 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="791eb2d2-e8b0-401d-8b8b-b79d675c1ca4" path="/var/lib/kubelet/pods/791eb2d2-e8b0-401d-8b8b-b79d675c1ca4/volumes" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.029980 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15ee050c-37dd-43df-8cbe-1200f09a5545\") pod \"prometheus-metric-storage-0\" (UID: \"13358646-85fa-4761-b4e8-ce5baf8851da\") " pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.064889 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.064931 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz4fm\" (UniqueName: \"kubernetes.io/projected/d2f2ec0a-16c2-4808-8871-a8e56bd045a9-kube-api-access-nz4fm\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.098494 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.331988 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerID="824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a" exitCode=0 Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.332675 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmwj8" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.332632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerDied","Data":"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a"} Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.332755 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmwj8" event={"ID":"d2f2ec0a-16c2-4808-8871-a8e56bd045a9","Type":"ContainerDied","Data":"c172eba0a6e88b586259ce4d77fcba9de3974f0deeaae0baf659acb7a41aaa60"} Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.332820 4922 scope.go:117] "RemoveContainer" containerID="824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.369103 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.381684 4922 scope.go:117] "RemoveContainer" containerID="e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.383173 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmwj8"] Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.438770 4922 scope.go:117] "RemoveContainer" containerID="5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.491968 4922 scope.go:117] "RemoveContainer" containerID="824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a" Feb 18 12:22:41 crc kubenswrapper[4922]: E0218 12:22:41.492439 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a\": container with ID starting with 824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a not found: ID does not exist" containerID="824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.492476 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a"} err="failed to get container status \"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a\": rpc error: code = NotFound desc = could not find container \"824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a\": container with ID starting with 824db469829217c6af7f82c4435908cbbf5d6fa5600ef7ccdcf836213996956a not found: ID does not exist" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.492504 4922 scope.go:117] "RemoveContainer" containerID="e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723" Feb 18 12:22:41 crc kubenswrapper[4922]: E0218 12:22:41.492987 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723\": container with ID starting with e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723 not found: ID does not exist" containerID="e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.493036 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723"} err="failed to get container status \"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723\": rpc error: code = NotFound desc = could not find container \"e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723\": container with ID starting with e7a62b2634ba91dccc174bd713229fd50823f1b237565279bd2b4231ccc19723 not found: ID does not exist" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.493063 4922 scope.go:117] "RemoveContainer" containerID="5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45" Feb 18 12:22:41 crc kubenswrapper[4922]: E0218 12:22:41.493348 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45\": container with ID starting with 5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45 not found: ID does not exist" containerID="5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.493415 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45"} err="failed to get container status \"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45\": rpc error: code = NotFound desc = could not find container \"5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45\": container with ID starting with 5866c42fc40619d4c7f32758c298f20a5ee96d82a018e9ab977c8f97a23b5c45 not found: ID does not exist" Feb 18 12:22:41 crc kubenswrapper[4922]: I0218 12:22:41.591405 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 12:22:42 crc kubenswrapper[4922]: I0218 12:22:42.345425 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerStarted","Data":"ef9587009618985e51ce2fcc04d7b7619474cc27c677aa1e820ca8841fcfc34c"} Feb 18 12:22:42 crc kubenswrapper[4922]: I0218 12:22:42.633597 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:42 crc kubenswrapper[4922]: I0218 12:22:42.688624 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:42 crc kubenswrapper[4922]: I0218 12:22:42.985960 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" path="/var/lib/kubelet/pods/d2f2ec0a-16c2-4808-8871-a8e56bd045a9/volumes" Feb 18 12:22:43 crc kubenswrapper[4922]: I0218 12:22:43.611657 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.364796 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6t7b" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="registry-server" containerID="cri-o://bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae" gracePeriod=2 Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.815274 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.950795 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw68x\" (UniqueName: \"kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x\") pod \"6217a40d-f959-4afa-b48e-b25c1c1693c1\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.950861 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content\") pod \"6217a40d-f959-4afa-b48e-b25c1c1693c1\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.951136 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities\") pod \"6217a40d-f959-4afa-b48e-b25c1c1693c1\" (UID: \"6217a40d-f959-4afa-b48e-b25c1c1693c1\") " Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.951878 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities" (OuterVolumeSpecName: "utilities") pod "6217a40d-f959-4afa-b48e-b25c1c1693c1" (UID: "6217a40d-f959-4afa-b48e-b25c1c1693c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:22:44 crc kubenswrapper[4922]: I0218 12:22:44.959707 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x" (OuterVolumeSpecName: "kube-api-access-rw68x") pod "6217a40d-f959-4afa-b48e-b25c1c1693c1" (UID: "6217a40d-f959-4afa-b48e-b25c1c1693c1"). InnerVolumeSpecName "kube-api-access-rw68x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.054910 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.054957 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw68x\" (UniqueName: \"kubernetes.io/projected/6217a40d-f959-4afa-b48e-b25c1c1693c1-kube-api-access-rw68x\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.125883 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6217a40d-f959-4afa-b48e-b25c1c1693c1" (UID: "6217a40d-f959-4afa-b48e-b25c1c1693c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.156422 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6217a40d-f959-4afa-b48e-b25c1c1693c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.373961 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerStarted","Data":"8727adbd08a3ca5bed3dd1d23301105b8cb55bae17fe0f81c41857080c79500d"} Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.377461 4922 generic.go:334] "Generic (PLEG): container finished" podID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerID="bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae" exitCode=0 Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.377518 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerDied","Data":"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae"} Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.377544 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6t7b" event={"ID":"6217a40d-f959-4afa-b48e-b25c1c1693c1","Type":"ContainerDied","Data":"01304453471ec40644ea6059952df8e463fbece81357fde7a770b3318d0244bd"} Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.377563 4922 scope.go:117] "RemoveContainer" containerID="bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.377942 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x6t7b" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.399343 4922 scope.go:117] "RemoveContainer" containerID="cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.438584 4922 scope.go:117] "RemoveContainer" containerID="13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.440271 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.457743 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6t7b"] Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.483714 4922 scope.go:117] "RemoveContainer" containerID="bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae" Feb 18 12:22:45 crc kubenswrapper[4922]: E0218 12:22:45.484581 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae\": container with ID starting with bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae not found: ID does not exist" containerID="bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.484628 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae"} err="failed to get container status \"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae\": rpc error: code = NotFound desc = could not find container \"bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae\": container with ID starting with bbebd17e24478f9f7aaaca2db042a4da18b50d95dcd264d43cd200ad4b1d6eae not found: ID does not exist" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.484660 4922 scope.go:117] "RemoveContainer" containerID="cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227" Feb 18 12:22:45 crc kubenswrapper[4922]: E0218 12:22:45.485096 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227\": container with ID starting with cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227 not found: ID does not exist" containerID="cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.485150 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227"} err="failed to get container status \"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227\": rpc error: code = NotFound desc = could not find container \"cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227\": container with ID starting with cbb7446ca5b63d5effa7c8e217cc14f792d76dd59052ea3d503ec58b1802b227 not found: ID does not exist" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.485186 4922 scope.go:117] "RemoveContainer" containerID="13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296" Feb 18 12:22:45 crc kubenswrapper[4922]: E0218 12:22:45.485495 4922 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296\": container with ID starting with 13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296 not found: ID does not exist" containerID="13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296" Feb 18 12:22:45 crc kubenswrapper[4922]: I0218 12:22:45.485531 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296"} err="failed to get container status \"13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296\": rpc error: code = NotFound desc = could not find container \"13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296\": container with ID starting with 13a7f699d921da48e9cffe7b53f8c4c33c05acbf32d0e501384d7a7ecca47296 not found: ID does not exist" Feb 18 12:22:46 crc kubenswrapper[4922]: I0218 12:22:46.984929 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" path="/var/lib/kubelet/pods/6217a40d-f959-4afa-b48e-b25c1c1693c1/volumes" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.429891 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.430925 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="extract-content" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.430947 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="extract-content" Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.430973 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.430981 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.430989 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.430997 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.431021 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="extract-utilities" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.431029 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="extract-utilities" Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.431054 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="extract-utilities" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.431062 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="extract-utilities" Feb 18 12:22:53 crc kubenswrapper[4922]: E0218 12:22:53.431072 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="extract-content" Feb 
18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.431080 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="extract-content" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.431296 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6217a40d-f959-4afa-b48e-b25c1c1693c1" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.431333 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f2ec0a-16c2-4808-8871-a8e56bd045a9" containerName="registry-server" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.433051 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.451917 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.472184 4922 generic.go:334] "Generic (PLEG): container finished" podID="13358646-85fa-4761-b4e8-ce5baf8851da" containerID="8727adbd08a3ca5bed3dd1d23301105b8cb55bae17fe0f81c41857080c79500d" exitCode=0 Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.472239 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerDied","Data":"8727adbd08a3ca5bed3dd1d23301105b8cb55bae17fe0f81c41857080c79500d"} Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.532478 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.533117 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66mb\" (UniqueName: \"kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.533284 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.635084 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.635633 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " 
pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.635645 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66mb\" (UniqueName: \"kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.635811 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.636380 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.655811 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66mb\" (UniqueName: \"kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb\") pod \"certified-operators-ktl86\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:53 crc kubenswrapper[4922]: I0218 12:22:53.754083 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:22:54 crc kubenswrapper[4922]: W0218 12:22:54.245426 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6668bc10_67e0_40a0_bdf8_760c01e67ffb.slice/crio-5800716cdf2af93e328e6d448e636f39680309cf829bdaeeca901dc7afe39c91 WatchSource:0}: Error finding container 5800716cdf2af93e328e6d448e636f39680309cf829bdaeeca901dc7afe39c91: Status 404 returned error can't find the container with id 5800716cdf2af93e328e6d448e636f39680309cf829bdaeeca901dc7afe39c91 Feb 18 12:22:54 crc kubenswrapper[4922]: I0218 12:22:54.252800 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:22:54 crc kubenswrapper[4922]: I0218 12:22:54.481948 4922 generic.go:334] "Generic (PLEG): container finished" podID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerID="f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9" exitCode=0 Feb 18 12:22:54 crc kubenswrapper[4922]: I0218 12:22:54.482074 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerDied","Data":"f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9"} Feb 18 12:22:54 crc kubenswrapper[4922]: I0218 12:22:54.482237 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerStarted","Data":"5800716cdf2af93e328e6d448e636f39680309cf829bdaeeca901dc7afe39c91"} Feb 18 12:22:54 crc kubenswrapper[4922]: I0218 12:22:54.484626 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerStarted","Data":"35204e3522b95ed558a273f4aaedf1fd175ede45f45ff6adc97a12ea6fee34c8"} Feb 18 12:22:56 crc kubenswrapper[4922]: I0218 12:22:56.508194 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerStarted","Data":"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e"} Feb 18 12:22:57 crc kubenswrapper[4922]: I0218 12:22:57.522489 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerStarted","Data":"4bcbcff1cb39e7c9b088b3aa53e88e0fbd4078ea0f6f0e23b93e68bb1838f2da"} Feb 18 12:22:57 crc kubenswrapper[4922]: I0218 12:22:57.522851 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"13358646-85fa-4761-b4e8-ce5baf8851da","Type":"ContainerStarted","Data":"9399d8895b96cf629da034389c9ca80ee6f91e327a0d6f52e7d26d4dd7e3c8a3"} Feb 18 12:22:57 crc kubenswrapper[4922]: I0218 12:22:57.524508 4922 generic.go:334] "Generic (PLEG): container finished" podID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerID="05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e" exitCode=0 Feb 18 12:22:57 crc kubenswrapper[4922]: I0218 12:22:57.524543 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerDied","Data":"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e"} Feb 18 12:22:57 crc kubenswrapper[4922]: I0218 12:22:57.554487 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.554453658 podStartE2EDuration="17.554453658s" podCreationTimestamp="2026-02-18 12:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:22:57.54741072 +0000 UTC m=+2779.275114820" watchObservedRunningTime="2026-02-18 12:22:57.554453658 +0000 UTC m=+2779.282157768" Feb 18 12:22:58 crc kubenswrapper[4922]: I0218 12:22:58.544941 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerStarted","Data":"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc"} Feb 18 12:22:58 crc kubenswrapper[4922]: I0218 12:22:58.572534 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ktl86" podStartSLOduration=2.040614907 podStartE2EDuration="5.572506118s" podCreationTimestamp="2026-02-18 12:22:53 +0000 UTC" firstStartedPulling="2026-02-18 12:22:54.483505434 +0000 UTC m=+2776.211209514" lastFinishedPulling="2026-02-18 12:22:58.015396645 +0000 UTC m=+2779.743100725" observedRunningTime="2026-02-18 12:22:58.562754741 +0000 UTC m=+2780.290458811" watchObservedRunningTime="2026-02-18 12:22:58.572506118 +0000 UTC m=+2780.300210208" Feb 18 12:23:01 crc kubenswrapper[4922]: I0218 12:23:01.099129 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 12:23:03 crc kubenswrapper[4922]: I0218 12:23:03.754451 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:03 crc kubenswrapper[4922]: I0218 12:23:03.754956 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:03 crc kubenswrapper[4922]: I0218 12:23:03.810212 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:04 crc kubenswrapper[4922]: I0218 12:23:04.641242 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:04 crc kubenswrapper[4922]: I0218 12:23:04.692126 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:23:06 crc kubenswrapper[4922]: I0218 12:23:06.614671 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ktl86" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="registry-server" containerID="cri-o://2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc" gracePeriod=2 Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.156400 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.195481 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities\") pod \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.195884 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x66mb\" (UniqueName: \"kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb\") pod \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.195958 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content\") pod \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\" (UID: \"6668bc10-67e0-40a0-bdf8-760c01e67ffb\") " Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.200444 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities" (OuterVolumeSpecName: "utilities") pod "6668bc10-67e0-40a0-bdf8-760c01e67ffb" (UID: "6668bc10-67e0-40a0-bdf8-760c01e67ffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.208410 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb" (OuterVolumeSpecName: "kube-api-access-x66mb") pod "6668bc10-67e0-40a0-bdf8-760c01e67ffb" (UID: "6668bc10-67e0-40a0-bdf8-760c01e67ffb"). InnerVolumeSpecName "kube-api-access-x66mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.253746 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6668bc10-67e0-40a0-bdf8-760c01e67ffb" (UID: "6668bc10-67e0-40a0-bdf8-760c01e67ffb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.299769 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.299838 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66mb\" (UniqueName: \"kubernetes.io/projected/6668bc10-67e0-40a0-bdf8-760c01e67ffb-kube-api-access-x66mb\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.299850 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6668bc10-67e0-40a0-bdf8-760c01e67ffb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.627230 4922 generic.go:334] "Generic (PLEG): container finished" podID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerID="2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc" exitCode=0 Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.627290 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerDied","Data":"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc"} Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.627391 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ktl86" event={"ID":"6668bc10-67e0-40a0-bdf8-760c01e67ffb","Type":"ContainerDied","Data":"5800716cdf2af93e328e6d448e636f39680309cf829bdaeeca901dc7afe39c91"} Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.627418 4922 scope.go:117] "RemoveContainer" containerID="2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.627420 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ktl86" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.663081 4922 scope.go:117] "RemoveContainer" containerID="05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.686435 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.694707 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ktl86"] Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.709601 4922 scope.go:117] "RemoveContainer" containerID="f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.760655 4922 scope.go:117] "RemoveContainer" containerID="2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc" Feb 18 12:23:07 crc kubenswrapper[4922]: E0218 12:23:07.761313 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc\": container with ID starting with 2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc not found: ID does not exist" containerID="2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.761388 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc"} err="failed to get container status \"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc\": rpc error: code = NotFound desc = could not find container \"2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc\": container with ID starting with 2f06fa7d1ccad53762185f070ac63faec3b39c0ad9069caaed5a221f878e67cc not found: ID does not exist" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.761418 4922 scope.go:117] "RemoveContainer" containerID="05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e" Feb 18 12:23:07 crc kubenswrapper[4922]: E0218 12:23:07.761766 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e\": container with ID starting with 05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e not found: ID does not exist" containerID="05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.761816 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e"} err="failed to get container status \"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e\": rpc error: code = NotFound desc = could not find container \"05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e\": container with ID starting with 05b11aba883e59a330f5bc2e9bb5d474ccb35d90127c13922b78218898b4cb6e not found: ID does not exist" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.761836 4922 scope.go:117] "RemoveContainer" containerID="f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9" Feb 18 12:23:07 crc kubenswrapper[4922]: E0218 12:23:07.762126 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9\": container with ID starting with f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9 not found: ID does not exist" containerID="f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9" Feb 18 12:23:07 crc kubenswrapper[4922]: I0218 12:23:07.762181 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9"} err="failed to get container status \"f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9\": rpc error: code = NotFound desc = could not find container \"f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9\": container with ID starting with f52dca76a6dfec27e0a59ae63867fef5d319b89c5f69f836ea7fe6d9a54240c9 not found: ID does not exist" Feb 18 12:23:08 crc kubenswrapper[4922]: I0218 12:23:08.986201 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" path="/var/lib/kubelet/pods/6668bc10-67e0-40a0-bdf8-760c01e67ffb/volumes" Feb 18 12:23:11 crc kubenswrapper[4922]: I0218 12:23:11.099333 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 12:23:11 crc kubenswrapper[4922]: I0218 12:23:11.105354 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 12:23:11 crc kubenswrapper[4922]: I0218 12:23:11.675600 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.571156 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:23:34 crc kubenswrapper[4922]: E0218 12:23:34.572259 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="extract-content" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.572276 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="extract-content" Feb 18 12:23:34 crc kubenswrapper[4922]: E0218 12:23:34.572295 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="registry-server" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.572301 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="registry-server" Feb 18 12:23:34 crc kubenswrapper[4922]: E0218 12:23:34.572337 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="extract-utilities" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.572343 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="extract-utilities" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.572625 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="6668bc10-67e0-40a0-bdf8-760c01e67ffb" containerName="registry-server" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.573591 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.579202 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-thfnr" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.579503 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.579637 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.579768 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.587034 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.670758 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.670885 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqctm\" (UniqueName: \"kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.670970 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671013 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671065 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671142 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671187 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671392 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.671432 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773105 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqctm\" (UniqueName: \"kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773211 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773259 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773311 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773435 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773460 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773520 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs\") pod \"tempest-tests-tempest\" 
(UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773544 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.773635 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.774690 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.775173 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.775396 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.775397 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.775569 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.784454 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.786489 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: 
I0218 12:23:34.786696 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.796324 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqctm\" (UniqueName: \"kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.813978 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " pod="openstack/tempest-tests-tempest" Feb 18 12:23:34 crc kubenswrapper[4922]: I0218 12:23:34.909020 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:23:35 crc kubenswrapper[4922]: I0218 12:23:35.402791 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:23:35 crc kubenswrapper[4922]: I0218 12:23:35.907565 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4525818f-9e1d-48a0-8ec1-1a22a0841dd4","Type":"ContainerStarted","Data":"492694f74401bc119697f1caa4fa178df1922c217659e262bc75d36660dd58d8"} Feb 18 12:23:47 crc kubenswrapper[4922]: I0218 12:23:47.268336 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 12:23:48 crc kubenswrapper[4922]: I0218 12:23:48.036463 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4525818f-9e1d-48a0-8ec1-1a22a0841dd4","Type":"ContainerStarted","Data":"e3b2b8b928d4d252bf46e4bb853a742c089adf478f27339f292b3bd6347dcdc0"} Feb 18 12:23:48 crc kubenswrapper[4922]: I0218 12:23:48.058832 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.198247956 podStartE2EDuration="15.058814711s" podCreationTimestamp="2026-02-18 12:23:33 +0000 UTC" firstStartedPulling="2026-02-18 12:23:35.404485018 +0000 UTC m=+2817.132189108" lastFinishedPulling="2026-02-18 12:23:47.265051753 +0000 UTC m=+2828.992755863" observedRunningTime="2026-02-18 12:23:48.052319787 +0000 UTC m=+2829.780023887" watchObservedRunningTime="2026-02-18 12:23:48.058814711 +0000 UTC m=+2829.786518791" Feb 18 12:24:09 crc kubenswrapper[4922]: I0218 12:24:09.808105 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:24:09 crc kubenswrapper[4922]: I0218 12:24:09.808782 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:24:39 crc 
kubenswrapper[4922]: I0218 12:24:39.808089 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:24:39 crc kubenswrapper[4922]: I0218 12:24:39.808623 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:25:09 crc kubenswrapper[4922]: I0218 12:25:09.807734 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:25:09 crc kubenswrapper[4922]: I0218 12:25:09.808275 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:25:09 crc kubenswrapper[4922]: I0218 12:25:09.808319 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:25:09 crc kubenswrapper[4922]: I0218 12:25:09.808865 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:25:09 crc kubenswrapper[4922]: I0218 12:25:09.808932 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7" gracePeriod=600 Feb 18 12:25:10 crc kubenswrapper[4922]: I0218 12:25:10.779526 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7" exitCode=0 Feb 18 12:25:10 crc kubenswrapper[4922]: I0218 12:25:10.779589 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7"} Feb 18 12:25:10 crc kubenswrapper[4922]: I0218 12:25:10.779958 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a"} Feb 18 12:25:10 crc kubenswrapper[4922]: I0218 12:25:10.779980 4922 scope.go:117] "RemoveContainer" 
containerID="9330193a32435c3bb58e3c62888aafde935dba8cdd2799d11f365fb7bef953ab" Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.882878 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.886761 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.906907 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.917856 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.918165 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44tg\" (UniqueName: \"kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:29 crc kubenswrapper[4922]: I0218 12:27:29.918330 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.019529 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z44tg\" (UniqueName: \"kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.019612 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.019664 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.020130 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.020222 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.045066 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44tg\" (UniqueName: \"kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg\") pod \"community-operators-kz8nw\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.221141 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:30 crc kubenswrapper[4922]: I0218 12:27:30.784835 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:31 crc kubenswrapper[4922]: I0218 12:27:31.023746 4922 generic.go:334] "Generic (PLEG): container finished" podID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerID="fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5" exitCode=0 Feb 18 12:27:31 crc kubenswrapper[4922]: I0218 12:27:31.023856 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerDied","Data":"fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5"} Feb 18 12:27:31 crc kubenswrapper[4922]: I0218 12:27:31.024063 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerStarted","Data":"9cc0ee33444edd82c3aef2b27c7dd32f17623fbbae0694e5eaf22e7c8ce3e73c"} Feb 18 12:27:31 crc kubenswrapper[4922]: I0218 12:27:31.026509 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:27:33 crc kubenswrapper[4922]: I0218 12:27:33.044760 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerStarted","Data":"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb"} Feb 18 12:27:35 crc kubenswrapper[4922]: I0218 12:27:35.067731 4922 generic.go:334] "Generic (PLEG): container finished" podID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerID="93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb" exitCode=0 Feb 18 12:27:35 crc kubenswrapper[4922]: I0218 12:27:35.068074 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerDied","Data":"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb"} Feb 18 12:27:36 crc kubenswrapper[4922]: I0218 12:27:36.077101 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerStarted","Data":"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a"} Feb 18 12:27:36 crc kubenswrapper[4922]: I0218 12:27:36.099520 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kz8nw" 
podStartSLOduration=2.654062371 podStartE2EDuration="7.099505061s" podCreationTimestamp="2026-02-18 12:27:29 +0000 UTC" firstStartedPulling="2026-02-18 12:27:31.02546449 +0000 UTC m=+3052.753168570" lastFinishedPulling="2026-02-18 12:27:35.47090718 +0000 UTC m=+3057.198611260" observedRunningTime="2026-02-18 12:27:36.09515096 +0000 UTC m=+3057.822855070" watchObservedRunningTime="2026-02-18 12:27:36.099505061 +0000 UTC m=+3057.827209141" Feb 18 12:27:39 crc kubenswrapper[4922]: I0218 12:27:39.807863 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:27:39 crc kubenswrapper[4922]: I0218 12:27:39.808491 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:27:40 crc kubenswrapper[4922]: I0218 12:27:40.221312 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:40 crc kubenswrapper[4922]: I0218 12:27:40.221378 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:40 crc kubenswrapper[4922]: I0218 12:27:40.275109 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:41 crc kubenswrapper[4922]: I0218 12:27:41.166982 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:44 crc kubenswrapper[4922]: I0218 12:27:44.472119 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:44 crc kubenswrapper[4922]: I0218 12:27:44.472465 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kz8nw" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="registry-server" containerID="cri-o://2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a" gracePeriod=2 Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.150976 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.158743 4922 generic.go:334] "Generic (PLEG): container finished" podID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerID="2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a" exitCode=0 Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.158791 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerDied","Data":"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a"} Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.158833 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kz8nw" event={"ID":"476deb39-d3e2-47b1-a10a-1043938fbbe0","Type":"ContainerDied","Data":"9cc0ee33444edd82c3aef2b27c7dd32f17623fbbae0694e5eaf22e7c8ce3e73c"} Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.158857 4922 scope.go:117] "RemoveContainer" containerID="2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.158925 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kz8nw" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.187606 4922 scope.go:117] "RemoveContainer" containerID="93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.215879 4922 scope.go:117] "RemoveContainer" containerID="fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.273495 4922 scope.go:117] "RemoveContainer" containerID="2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a" Feb 18 12:27:45 crc kubenswrapper[4922]: E0218 12:27:45.275527 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a\": container with ID starting with 2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a not found: ID does not exist" containerID="2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.275579 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a"} err="failed to get container status \"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a\": rpc error: code = NotFound desc = could not find container \"2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a\": container with ID starting with 2218e06474151611f107b759e15bcdd7c89bff8e2dddc851df6132106966779a not found: ID does not exist" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.275613 4922 scope.go:117] "RemoveContainer" containerID="93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb" Feb 18 12:27:45 crc kubenswrapper[4922]: E0218 12:27:45.276005 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb\": container with ID starting with 93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb not found: ID does not exist" 
containerID="93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.276035 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb"} err="failed to get container status \"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb\": rpc error: code = NotFound desc = could not find container \"93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb\": container with ID starting with 93a952cae5ab5bceca4d519c84887375579cf400e7a8973df36383eb70a98dbb not found: ID does not exist" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.276051 4922 scope.go:117] "RemoveContainer" containerID="fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5" Feb 18 12:27:45 crc kubenswrapper[4922]: E0218 12:27:45.276337 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5\": container with ID starting with fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5 not found: ID does not exist" containerID="fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.276481 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5"} err="failed to get container status \"fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5\": rpc error: code = NotFound desc = could not find container \"fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5\": container with ID starting with fe11e3a24b6b9e456a8b43ffe889d179e3cf3376d32b416411f7b9cf898ab3c5 not found: ID does not exist" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.340915 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content\") pod \"476deb39-d3e2-47b1-a10a-1043938fbbe0\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.341024 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z44tg\" (UniqueName: \"kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg\") pod \"476deb39-d3e2-47b1-a10a-1043938fbbe0\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.341094 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities\") pod \"476deb39-d3e2-47b1-a10a-1043938fbbe0\" (UID: \"476deb39-d3e2-47b1-a10a-1043938fbbe0\") " Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.342249 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities" (OuterVolumeSpecName: "utilities") pod "476deb39-d3e2-47b1-a10a-1043938fbbe0" (UID: "476deb39-d3e2-47b1-a10a-1043938fbbe0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.347827 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg" (OuterVolumeSpecName: "kube-api-access-z44tg") pod "476deb39-d3e2-47b1-a10a-1043938fbbe0" (UID: "476deb39-d3e2-47b1-a10a-1043938fbbe0"). InnerVolumeSpecName "kube-api-access-z44tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.389344 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "476deb39-d3e2-47b1-a10a-1043938fbbe0" (UID: "476deb39-d3e2-47b1-a10a-1043938fbbe0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.443236 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.443282 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z44tg\" (UniqueName: \"kubernetes.io/projected/476deb39-d3e2-47b1-a10a-1043938fbbe0-kube-api-access-z44tg\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.443296 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/476deb39-d3e2-47b1-a10a-1043938fbbe0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.499860 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:45 crc kubenswrapper[4922]: I0218 12:27:45.512869 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kz8nw"] Feb 18 12:27:46 crc kubenswrapper[4922]: I0218 12:27:46.985806 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" path="/var/lib/kubelet/pods/476deb39-d3e2-47b1-a10a-1043938fbbe0/volumes" Feb 18 12:28:09 crc kubenswrapper[4922]: I0218 12:28:09.807978 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:28:09 crc kubenswrapper[4922]: I0218 12:28:09.809215 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:28:39 crc kubenswrapper[4922]: I0218 12:28:39.807588 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:28:39 crc kubenswrapper[4922]: I0218 12:28:39.808208 4922 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:28:39 crc kubenswrapper[4922]: I0218 12:28:39.808270 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:28:39 crc kubenswrapper[4922]: I0218 12:28:39.809146 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:28:39 crc kubenswrapper[4922]: I0218 12:28:39.809219 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" gracePeriod=600 Feb 18 12:28:39 crc kubenswrapper[4922]: E0218 12:28:39.929500 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:28:40 crc kubenswrapper[4922]: I0218 12:28:40.673965 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" exitCode=0 Feb 18 12:28:40 crc kubenswrapper[4922]: I0218 12:28:40.674035 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a"} Feb 18 12:28:40 crc kubenswrapper[4922]: I0218 12:28:40.674287 4922 scope.go:117] "RemoveContainer" containerID="ef51f81fc06c6367e33199b91ef56039257473dad48437b29bc57a1983bf05c7" Feb 18 12:28:40 crc kubenswrapper[4922]: I0218 12:28:40.675167 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:28:40 crc kubenswrapper[4922]: E0218 12:28:40.675727 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:28:54 crc kubenswrapper[4922]: I0218 12:28:54.973272 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:28:54 crc kubenswrapper[4922]: E0218 12:28:54.974423 4922 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:29:05 crc kubenswrapper[4922]: I0218 12:29:05.973684 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:29:05 crc kubenswrapper[4922]: E0218 12:29:05.974431 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:29:20 crc kubenswrapper[4922]: I0218 12:29:20.973114 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:29:20 crc kubenswrapper[4922]: E0218 12:29:20.974076 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:29:31 crc kubenswrapper[4922]: I0218 12:29:31.973322 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:29:31 crc kubenswrapper[4922]: E0218 12:29:31.974132 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:29:42 crc kubenswrapper[4922]: I0218 12:29:42.973558 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:29:42 crc kubenswrapper[4922]: E0218 12:29:42.974421 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:29:54 crc kubenswrapper[4922]: I0218 12:29:54.973243 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:29:54 crc kubenswrapper[4922]: E0218 12:29:54.974104 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.151269 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw"] Feb 18 12:30:00 crc kubenswrapper[4922]: E0218 12:30:00.152255 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="extract-content" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.152269 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="extract-content" Feb 18 12:30:00 crc kubenswrapper[4922]: E0218 12:30:00.152284 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="extract-utilities" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.152290 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="extract-utilities" Feb 18 12:30:00 crc kubenswrapper[4922]: E0218 12:30:00.152302 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="registry-server" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.152308 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="registry-server" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.152598 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="476deb39-d3e2-47b1-a10a-1043938fbbe0" containerName="registry-server" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.153280 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.155807 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.156289 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.165982 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw"] Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.326982 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9t25\" (UniqueName: \"kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.327152 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.327199 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.429393 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9t25\" (UniqueName: \"kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.429512 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.429547 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.430414 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume\") pod 
\"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.435017 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.449675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9t25\" (UniqueName: \"kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25\") pod \"collect-profiles-29523630-956lw\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.477620 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:00 crc kubenswrapper[4922]: I0218 12:30:00.919033 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw"] Feb 18 12:30:01 crc kubenswrapper[4922]: I0218 12:30:01.391383 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" event={"ID":"e56f5497-9f1a-455d-8d92-36f5dbcafe8b","Type":"ContainerStarted","Data":"97add8b5c3088aee73c0bc262f74aeb973fd2de8d5be81106a8100386b524ddc"} Feb 18 12:30:01 crc kubenswrapper[4922]: I0218 12:30:01.391695 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" event={"ID":"e56f5497-9f1a-455d-8d92-36f5dbcafe8b","Type":"ContainerStarted","Data":"ffffbeddb8706dab52f0e684d84866f3959d4a3ec913158eddbdaed70cb3377e"} Feb 18 12:30:01 crc kubenswrapper[4922]: I0218 12:30:01.411812 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" podStartSLOduration=1.411794609 podStartE2EDuration="1.411794609s" podCreationTimestamp="2026-02-18 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:30:01.404375992 +0000 UTC m=+3203.132080072" watchObservedRunningTime="2026-02-18 12:30:01.411794609 +0000 UTC m=+3203.139498689" Feb 18 12:30:02 crc kubenswrapper[4922]: I0218 12:30:02.402585 4922 generic.go:334] "Generic (PLEG): container finished" podID="e56f5497-9f1a-455d-8d92-36f5dbcafe8b" containerID="97add8b5c3088aee73c0bc262f74aeb973fd2de8d5be81106a8100386b524ddc" exitCode=0 Feb 18 12:30:02 crc kubenswrapper[4922]: I0218 12:30:02.402640 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" event={"ID":"e56f5497-9f1a-455d-8d92-36f5dbcafe8b","Type":"ContainerDied","Data":"97add8b5c3088aee73c0bc262f74aeb973fd2de8d5be81106a8100386b524ddc"} Feb 18 12:30:03 crc kubenswrapper[4922]: I0218 12:30:03.839305 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.007961 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume\") pod \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.008210 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9t25\" (UniqueName: \"kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25\") pod \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.008248 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume\") pod \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\" (UID: \"e56f5497-9f1a-455d-8d92-36f5dbcafe8b\") " Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.008655 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e56f5497-9f1a-455d-8d92-36f5dbcafe8b" (UID: "e56f5497-9f1a-455d-8d92-36f5dbcafe8b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.013846 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e56f5497-9f1a-455d-8d92-36f5dbcafe8b" (UID: "e56f5497-9f1a-455d-8d92-36f5dbcafe8b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.014062 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25" (OuterVolumeSpecName: "kube-api-access-g9t25") pod "e56f5497-9f1a-455d-8d92-36f5dbcafe8b" (UID: "e56f5497-9f1a-455d-8d92-36f5dbcafe8b"). InnerVolumeSpecName "kube-api-access-g9t25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.110839 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9t25\" (UniqueName: \"kubernetes.io/projected/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-kube-api-access-g9t25\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.110875 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.110884 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e56f5497-9f1a-455d-8d92-36f5dbcafe8b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.420182 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" event={"ID":"e56f5497-9f1a-455d-8d92-36f5dbcafe8b","Type":"ContainerDied","Data":"ffffbeddb8706dab52f0e684d84866f3959d4a3ec913158eddbdaed70cb3377e"} Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.420231 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-956lw" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.420243 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffffbeddb8706dab52f0e684d84866f3959d4a3ec913158eddbdaed70cb3377e" Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.484489 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4"] Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.494528 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-sckv4"] Feb 18 12:30:04 crc kubenswrapper[4922]: I0218 12:30:04.990917 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bd299a-42ac-4c5a-93ff-5809da5517b3" path="/var/lib/kubelet/pods/74bd299a-42ac-4c5a-93ff-5809da5517b3/volumes" Feb 18 12:30:05 crc kubenswrapper[4922]: I0218 12:30:05.973895 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:30:05 crc kubenswrapper[4922]: E0218 12:30:05.974543 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:30:13 crc kubenswrapper[4922]: I0218 12:30:13.460564 4922 scope.go:117] "RemoveContainer" containerID="1e57799f76ef61ec42eb4d7506cd5272291d57133dccf113ac6a6ed7f96b16b6" Feb 18 12:30:16 crc kubenswrapper[4922]: I0218 12:30:16.973043 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:30:16 crc kubenswrapper[4922]: E0218 12:30:16.975119 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:30:28 crc kubenswrapper[4922]: I0218 12:30:28.981632 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:30:28 crc kubenswrapper[4922]: E0218 12:30:28.982485 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:30:41 crc kubenswrapper[4922]: I0218 12:30:41.972829 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:30:41 crc kubenswrapper[4922]: E0218 12:30:41.973854 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:30:53 crc kubenswrapper[4922]: I0218 12:30:53.973479 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:30:53 crc kubenswrapper[4922]: E0218 12:30:53.974576 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:31:05 crc kubenswrapper[4922]: I0218 12:31:05.973143 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:31:05 crc kubenswrapper[4922]: E0218 12:31:05.974312 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:31:20 crc kubenswrapper[4922]: I0218 12:31:20.973499 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:31:20 crc kubenswrapper[4922]: E0218 12:31:20.974253 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:31:32 crc kubenswrapper[4922]: I0218 12:31:32.973684 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:31:32 crc kubenswrapper[4922]: E0218 12:31:32.974674 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:31:43 crc kubenswrapper[4922]: I0218 12:31:43.973961 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:31:43 crc kubenswrapper[4922]: E0218 12:31:43.974722 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:31:54 crc kubenswrapper[4922]: I0218 12:31:54.973903 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:31:54 crc kubenswrapper[4922]: E0218 12:31:54.974795 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:32:09 crc kubenswrapper[4922]: I0218 12:32:09.972908 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:32:09 crc kubenswrapper[4922]: E0218 12:32:09.973717 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:32:20 crc kubenswrapper[4922]: I0218 12:32:20.973783 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:32:20 crc kubenswrapper[4922]: E0218 12:32:20.974599 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:32:35 crc kubenswrapper[4922]: I0218 12:32:35.973595 4922 
scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:32:35 crc kubenswrapper[4922]: E0218 12:32:35.974311 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.178020 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n6xrc"] Feb 18 12:32:36 crc kubenswrapper[4922]: E0218 12:32:36.178421 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56f5497-9f1a-455d-8d92-36f5dbcafe8b" containerName="collect-profiles" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.178438 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56f5497-9f1a-455d-8d92-36f5dbcafe8b" containerName="collect-profiles" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.178667 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56f5497-9f1a-455d-8d92-36f5dbcafe8b" containerName="collect-profiles" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.185822 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.258830 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6xrc"] Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.344849 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8k9h\" (UniqueName: \"kubernetes.io/projected/5e74836e-69fc-4faa-ac09-05926ad4810a-kube-api-access-b8k9h\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.345178 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-catalog-content\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.345204 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-utilities\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.446728 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8k9h\" (UniqueName: \"kubernetes.io/projected/5e74836e-69fc-4faa-ac09-05926ad4810a-kube-api-access-b8k9h\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.446902 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-catalog-content\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.446931 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-utilities\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.447529 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-catalog-content\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.447590 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e74836e-69fc-4faa-ac09-05926ad4810a-utilities\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.472396 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8k9h\" (UniqueName: \"kubernetes.io/projected/5e74836e-69fc-4faa-ac09-05926ad4810a-kube-api-access-b8k9h\") pod \"redhat-operators-n6xrc\" (UID: \"5e74836e-69fc-4faa-ac09-05926ad4810a\") " pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.509841 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:36 crc kubenswrapper[4922]: I0218 12:32:36.998619 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6xrc"] Feb 18 12:32:37 crc kubenswrapper[4922]: I0218 12:32:37.767682 4922 generic.go:334] "Generic (PLEG): container finished" podID="5e74836e-69fc-4faa-ac09-05926ad4810a" containerID="8ee471cb05a8d2384f66e39b6b4361473f43b7c046668715e859cf3505d7cf06" exitCode=0 Feb 18 12:32:37 crc kubenswrapper[4922]: I0218 12:32:37.767812 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6xrc" event={"ID":"5e74836e-69fc-4faa-ac09-05926ad4810a","Type":"ContainerDied","Data":"8ee471cb05a8d2384f66e39b6b4361473f43b7c046668715e859cf3505d7cf06"} Feb 18 12:32:37 crc kubenswrapper[4922]: I0218 12:32:37.768060 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6xrc" event={"ID":"5e74836e-69fc-4faa-ac09-05926ad4810a","Type":"ContainerStarted","Data":"1e8088badc46814acfe2155e1deee69be9d1251ea91af4603dca25b1daef5e16"} Feb 18 12:32:37 crc kubenswrapper[4922]: I0218 12:32:37.770275 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:32:48 crc kubenswrapper[4922]: I0218 12:32:48.881716 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6xrc" event={"ID":"5e74836e-69fc-4faa-ac09-05926ad4810a","Type":"ContainerStarted","Data":"afd804f20029a44d8ec673f1f244f41a3d5964ef86b2b8983cfaeb8895a19e3a"} Feb 18 12:32:50 crc kubenswrapper[4922]: I0218 12:32:50.906739 4922 generic.go:334] "Generic (PLEG): container finished" podID="5e74836e-69fc-4faa-ac09-05926ad4810a" containerID="afd804f20029a44d8ec673f1f244f41a3d5964ef86b2b8983cfaeb8895a19e3a" exitCode=0 Feb 18 12:32:50 crc kubenswrapper[4922]: I0218 12:32:50.906825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6xrc" event={"ID":"5e74836e-69fc-4faa-ac09-05926ad4810a","Type":"ContainerDied","Data":"afd804f20029a44d8ec673f1f244f41a3d5964ef86b2b8983cfaeb8895a19e3a"} Feb 18 12:32:50 crc kubenswrapper[4922]: I0218 12:32:50.973276 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:32:50 crc kubenswrapper[4922]: E0218 12:32:50.973636 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:32:51 crc kubenswrapper[4922]: I0218 12:32:51.917807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n6xrc" event={"ID":"5e74836e-69fc-4faa-ac09-05926ad4810a","Type":"ContainerStarted","Data":"c25c01881e6a39dfb1732a51ced35ab964fcbb9c5bc0168947ec789be35548e9"} Feb 18 12:32:51 crc kubenswrapper[4922]: I0218 12:32:51.943284 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n6xrc" podStartSLOduration=2.203397433 podStartE2EDuration="15.943266309s" podCreationTimestamp="2026-02-18 12:32:36 +0000 UTC" firstStartedPulling="2026-02-18 12:32:37.769929758 
+0000 UTC m=+3359.497633858" lastFinishedPulling="2026-02-18 12:32:51.509798664 +0000 UTC m=+3373.237502734" observedRunningTime="2026-02-18 12:32:51.934458266 +0000 UTC m=+3373.662162376" watchObservedRunningTime="2026-02-18 12:32:51.943266309 +0000 UTC m=+3373.670970389" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.366022 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.369562 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.383139 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.477538 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.477698 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.477728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mdcw\" (UniqueName: \"kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.579521 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.579841 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mdcw\" (UniqueName: \"kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.579962 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.580452 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") 
" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.580866 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.635616 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mdcw\" (UniqueName: \"kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw\") pod \"redhat-marketplace-8hj8v\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:53 crc kubenswrapper[4922]: I0218 12:32:53.708864 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:32:54 crc kubenswrapper[4922]: I0218 12:32:54.845500 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:32:54 crc kubenswrapper[4922]: I0218 12:32:54.949170 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerStarted","Data":"37abbd67b2d6143cc216bd867afee16da3fa841e67518e563a9512ece0b85196"} Feb 18 12:32:55 crc kubenswrapper[4922]: I0218 12:32:55.962555 4922 generic.go:334] "Generic (PLEG): container finished" podID="bd456290-ba12-4116-9a58-04ed7fcba476" containerID="296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6" exitCode=0 Feb 18 12:32:55 crc kubenswrapper[4922]: I0218 12:32:55.962648 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerDied","Data":"296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6"} Feb 18 12:32:56 crc kubenswrapper[4922]: I0218 12:32:56.510434 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:56 crc kubenswrapper[4922]: I0218 12:32:56.510746 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:32:57 crc kubenswrapper[4922]: I0218 12:32:57.000968 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerStarted","Data":"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778"} Feb 18 12:32:57 crc kubenswrapper[4922]: I0218 12:32:57.556844 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n6xrc" podUID="5e74836e-69fc-4faa-ac09-05926ad4810a" containerName="registry-server" probeResult="failure" output=< Feb 18 12:32:57 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:32:57 crc kubenswrapper[4922]: > Feb 18 12:32:58 crc kubenswrapper[4922]: I0218 12:32:58.011020 4922 generic.go:334] "Generic (PLEG): container finished" podID="bd456290-ba12-4116-9a58-04ed7fcba476" containerID="dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778" exitCode=0 Feb 18 12:32:58 crc kubenswrapper[4922]: I0218 12:32:58.011059 4922 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerDied","Data":"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778"} Feb 18 12:32:59 crc kubenswrapper[4922]: I0218 12:32:59.022942 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerStarted","Data":"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452"} Feb 18 12:33:03 crc kubenswrapper[4922]: I0218 12:33:03.565287 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:33:03 crc kubenswrapper[4922]: E0218 12:33:03.566035 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:33:03 crc kubenswrapper[4922]: I0218 12:33:03.709605 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:03 crc kubenswrapper[4922]: I0218 12:33:03.709651 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:03 crc kubenswrapper[4922]: I0218 12:33:03.767489 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:03 crc kubenswrapper[4922]: I0218 12:33:03.785228 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8hj8v" podStartSLOduration=8.336300466 podStartE2EDuration="10.785209327s" podCreationTimestamp="2026-02-18 12:32:53 +0000 UTC" firstStartedPulling="2026-02-18 12:32:55.964185991 +0000 UTC m=+3377.691890071" lastFinishedPulling="2026-02-18 12:32:58.413094852 +0000 UTC m=+3380.140798932" observedRunningTime="2026-02-18 12:32:59.043999131 +0000 UTC m=+3380.771703221" watchObservedRunningTime="2026-02-18 12:33:03.785209327 +0000 UTC m=+3385.512913407" Feb 18 12:33:04 crc kubenswrapper[4922]: I0218 12:33:04.645378 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:04 crc kubenswrapper[4922]: I0218 12:33:04.699172 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:33:06 crc kubenswrapper[4922]: I0218 12:33:06.562628 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:33:06 crc kubenswrapper[4922]: I0218 12:33:06.611474 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n6xrc" Feb 18 12:33:06 crc kubenswrapper[4922]: I0218 12:33:06.614865 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8hj8v" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="registry-server" containerID="cri-o://6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452" 
gracePeriod=2 Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.121982 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.224140 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content\") pod \"bd456290-ba12-4116-9a58-04ed7fcba476\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.224302 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities\") pod \"bd456290-ba12-4116-9a58-04ed7fcba476\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.224476 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mdcw\" (UniqueName: \"kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw\") pod \"bd456290-ba12-4116-9a58-04ed7fcba476\" (UID: \"bd456290-ba12-4116-9a58-04ed7fcba476\") " Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.225395 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities" (OuterVolumeSpecName: "utilities") pod "bd456290-ba12-4116-9a58-04ed7fcba476" (UID: "bd456290-ba12-4116-9a58-04ed7fcba476"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.226199 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.232200 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw" (OuterVolumeSpecName: "kube-api-access-4mdcw") pod "bd456290-ba12-4116-9a58-04ed7fcba476" (UID: "bd456290-ba12-4116-9a58-04ed7fcba476"). InnerVolumeSpecName "kube-api-access-4mdcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.241835 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n6xrc"] Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.256324 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd456290-ba12-4116-9a58-04ed7fcba476" (UID: "bd456290-ba12-4116-9a58-04ed7fcba476"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.328689 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd456290-ba12-4116-9a58-04ed7fcba476-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.328891 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mdcw\" (UniqueName: \"kubernetes.io/projected/bd456290-ba12-4116-9a58-04ed7fcba476-kube-api-access-4mdcw\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.602175 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.602418 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48d4t" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="registry-server" containerID="cri-o://377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55" gracePeriod=2 Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.637633 4922 generic.go:334] "Generic (PLEG): container finished" podID="bd456290-ba12-4116-9a58-04ed7fcba476" containerID="6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452" exitCode=0 Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.638296 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerDied","Data":"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452"} Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.638380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8hj8v" event={"ID":"bd456290-ba12-4116-9a58-04ed7fcba476","Type":"ContainerDied","Data":"37abbd67b2d6143cc216bd867afee16da3fa841e67518e563a9512ece0b85196"} Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.638387 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8hj8v" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.638404 4922 scope.go:117] "RemoveContainer" containerID="6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.660851 4922 scope.go:117] "RemoveContainer" containerID="dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.681975 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.692124 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8hj8v"] Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.698547 4922 scope.go:117] "RemoveContainer" containerID="296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.854587 4922 scope.go:117] "RemoveContainer" containerID="6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452" Feb 18 12:33:07 crc kubenswrapper[4922]: E0218 12:33:07.855161 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452\": container with ID starting with 6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452 not found: ID does not exist" containerID="6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.855207 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452"} err="failed to get container status \"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452\": rpc error: code = NotFound desc = could not find container \"6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452\": container with ID starting with 6cd64925f7df3c941d9123838b532a293586d9f1541dbd6b6b614a998aac6452 not found: ID does not exist" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.855231 4922 scope.go:117] "RemoveContainer" containerID="dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778" Feb 18 12:33:07 crc kubenswrapper[4922]: E0218 12:33:07.855993 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778\": container with ID starting with dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778 not found: ID does not exist" containerID="dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.856030 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778"} err="failed to get container status \"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778\": rpc error: code = NotFound desc = could not find container \"dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778\": container with ID starting with dc6eae0b7e35dd4caea940146fcc325f1c9769371fb668a92d65ea86bae48778 not found: ID does not exist" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.856053 4922 scope.go:117] "RemoveContainer" 
containerID="296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6" Feb 18 12:33:07 crc kubenswrapper[4922]: E0218 12:33:07.856527 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6\": container with ID starting with 296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6 not found: ID does not exist" containerID="296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6" Feb 18 12:33:07 crc kubenswrapper[4922]: I0218 12:33:07.856568 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6"} err="failed to get container status \"296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6\": rpc error: code = NotFound desc = could not find container \"296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6\": container with ID starting with 296617ba970beec03022dcbaf12a401d44ce2babd980348b145fc78e2c54a9d6 not found: ID does not exist" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.164572 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.247268 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities\") pod \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.247334 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content\") pod \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.247476 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2pw6\" (UniqueName: \"kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6\") pod \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\" (UID: \"f1faa074-0925-4c46-b2d7-3d5590f2bfb2\") " Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.254671 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities" (OuterVolumeSpecName: "utilities") pod "f1faa074-0925-4c46-b2d7-3d5590f2bfb2" (UID: "f1faa074-0925-4c46-b2d7-3d5590f2bfb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.259767 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6" (OuterVolumeSpecName: "kube-api-access-g2pw6") pod "f1faa074-0925-4c46-b2d7-3d5590f2bfb2" (UID: "f1faa074-0925-4c46-b2d7-3d5590f2bfb2"). InnerVolumeSpecName "kube-api-access-g2pw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.349239 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.349275 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2pw6\" (UniqueName: \"kubernetes.io/projected/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-kube-api-access-g2pw6\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.517317 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1faa074-0925-4c46-b2d7-3d5590f2bfb2" (UID: "f1faa074-0925-4c46-b2d7-3d5590f2bfb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.553226 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1faa074-0925-4c46-b2d7-3d5590f2bfb2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.653670 4922 generic.go:334] "Generic (PLEG): container finished" podID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerID="377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55" exitCode=0 Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.653718 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerDied","Data":"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55"} Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.653748 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48d4t" event={"ID":"f1faa074-0925-4c46-b2d7-3d5590f2bfb2","Type":"ContainerDied","Data":"b63935ff693150fab51446b824532dfaf0826366b92ab601cfc7bb7ab4edc1ce"} Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.653771 4922 scope.go:117] "RemoveContainer" containerID="377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.653891 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48d4t" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.701684 4922 scope.go:117] "RemoveContainer" containerID="23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.702599 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.714704 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48d4t"] Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.734941 4922 scope.go:117] "RemoveContainer" containerID="9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.805130 4922 scope.go:117] "RemoveContainer" containerID="377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55" Feb 18 12:33:08 crc kubenswrapper[4922]: E0218 12:33:08.805560 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55\": container with ID starting with 377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55 not found: ID does not exist" containerID="377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.805592 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55"} err="failed to get container status \"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55\": rpc error: code = NotFound desc = could not find container \"377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55\": container with ID starting with 377e6020d0b42c5266d8273b9b34ce499f228a6c68677cbc4c4e34b4940a1f55 not found: ID does not exist" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.805612 4922 scope.go:117] "RemoveContainer" containerID="23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6" Feb 18 12:33:08 crc kubenswrapper[4922]: E0218 12:33:08.806009 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6\": container with ID starting with 23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6 not found: ID does not exist" containerID="23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.806032 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6"} err="failed to get container status \"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6\": rpc error: code = NotFound desc = could not find container \"23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6\": container with ID starting with 23f296a4f83051f275c5ad75133b9d989ffbb5e5773e3cf9682aa82e8d0b8ec6 not found: ID does not exist" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.806045 4922 scope.go:117] "RemoveContainer" containerID="9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4" Feb 18 12:33:08 crc kubenswrapper[4922]: E0218 12:33:08.806557 4922 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4\": container with ID starting with 9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4 not found: ID does not exist" containerID="9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4" Feb 18 12:33:08 crc kubenswrapper[4922]: I0218 12:33:08.806608 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4"} err="failed to get container status \"9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4\": rpc error: code = NotFound desc = could not find container \"9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4\": container with ID starting with 9e3a83e756a30ee92f9449f7ddc8339c21a1abdeda0792499d4a21920ed6c1c4 not found: ID does not exist" Feb 18 12:33:09 crc kubenswrapper[4922]: I0218 12:33:09.009076 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" path="/var/lib/kubelet/pods/bd456290-ba12-4116-9a58-04ed7fcba476/volumes" Feb 18 12:33:09 crc kubenswrapper[4922]: I0218 12:33:09.018618 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" path="/var/lib/kubelet/pods/f1faa074-0925-4c46-b2d7-3d5590f2bfb2/volumes" Feb 18 12:33:14 crc kubenswrapper[4922]: I0218 12:33:14.973053 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:33:14 crc kubenswrapper[4922]: E0218 12:33:14.973790 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:33:25 crc kubenswrapper[4922]: I0218 12:33:25.973269 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:33:25 crc kubenswrapper[4922]: E0218 12:33:25.974120 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:33:39 crc kubenswrapper[4922]: I0218 12:33:39.972743 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:33:40 crc kubenswrapper[4922]: I0218 12:33:40.930978 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654"} Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.645713 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646743 4922 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646762 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646775 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="extract-content" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646783 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="extract-content" Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646801 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="extract-content" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646811 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="extract-content" Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646830 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="extract-utilities" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646838 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="extract-utilities" Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646863 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646871 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: E0218 12:34:06.646886 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="extract-utilities" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.646893 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="extract-utilities" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.647141 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd456290-ba12-4116-9a58-04ed7fcba476" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.647156 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1faa074-0925-4c46-b2d7-3d5590f2bfb2" containerName="registry-server" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.649950 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.652900 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmq7z\" (UniqueName: \"kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.653195 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.653276 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.660079 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.754801 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.755117 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmq7z\" (UniqueName: \"kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.755232 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.755605 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.755686 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.779953 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vmq7z\" (UniqueName: \"kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z\") pod \"certified-operators-vqpdr\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:06 crc kubenswrapper[4922]: I0218 12:34:06.977267 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:07 crc kubenswrapper[4922]: I0218 12:34:07.469805 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:07 crc kubenswrapper[4922]: I0218 12:34:07.645587 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerStarted","Data":"e9c90e41c6e32a490492f7cdbaf233137eb96ac3ecd876f57a74114cd5f4c029"} Feb 18 12:34:08 crc kubenswrapper[4922]: I0218 12:34:08.654873 4922 generic.go:334] "Generic (PLEG): container finished" podID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerID="d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e" exitCode=0 Feb 18 12:34:08 crc kubenswrapper[4922]: I0218 12:34:08.655087 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerDied","Data":"d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e"} Feb 18 12:34:10 crc kubenswrapper[4922]: I0218 12:34:10.674149 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerStarted","Data":"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f"} Feb 18 12:34:11 crc kubenswrapper[4922]: I0218 12:34:11.683968 4922 generic.go:334] "Generic (PLEG): container finished" podID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerID="4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f" exitCode=0 Feb 18 12:34:11 crc kubenswrapper[4922]: I0218 12:34:11.684088 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerDied","Data":"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f"} Feb 18 12:34:12 crc kubenswrapper[4922]: I0218 12:34:12.702079 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerStarted","Data":"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e"} Feb 18 12:34:12 crc kubenswrapper[4922]: I0218 12:34:12.721960 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vqpdr" podStartSLOduration=3.296533479 podStartE2EDuration="6.721938648s" podCreationTimestamp="2026-02-18 12:34:06 +0000 UTC" firstStartedPulling="2026-02-18 12:34:08.656591271 +0000 UTC m=+3450.384295351" lastFinishedPulling="2026-02-18 12:34:12.08199642 +0000 UTC m=+3453.809700520" observedRunningTime="2026-02-18 12:34:12.718931722 +0000 UTC m=+3454.446635802" watchObservedRunningTime="2026-02-18 12:34:12.721938648 +0000 UTC m=+3454.449642728" Feb 18 12:34:16 crc kubenswrapper[4922]: I0218 12:34:16.985488 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:16 crc kubenswrapper[4922]: I0218 12:34:16.986085 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:17 crc kubenswrapper[4922]: I0218 12:34:17.028285 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:17 crc kubenswrapper[4922]: I0218 12:34:17.789275 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:17 crc kubenswrapper[4922]: I0218 12:34:17.838026 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:19 crc kubenswrapper[4922]: I0218 12:34:19.765710 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vqpdr" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="registry-server" containerID="cri-o://f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e" gracePeriod=2 Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.312378 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.419217 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmq7z\" (UniqueName: \"kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z\") pod \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.419270 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content\") pod \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.419547 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities\") pod \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\" (UID: \"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5\") " Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.420480 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities" (OuterVolumeSpecName: "utilities") pod "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" (UID: "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.426483 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z" (OuterVolumeSpecName: "kube-api-access-vmq7z") pod "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" (UID: "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5"). InnerVolumeSpecName "kube-api-access-vmq7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.521749 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.521955 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmq7z\" (UniqueName: \"kubernetes.io/projected/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-kube-api-access-vmq7z\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.775208 4922 generic.go:334] "Generic (PLEG): container finished" podID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerID="f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e" exitCode=0 Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.775532 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerDied","Data":"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e"} Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.775580 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqpdr" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.775674 4922 scope.go:117] "RemoveContainer" containerID="f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.775653 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqpdr" event={"ID":"d1f3187c-1e35-4bda-a0d3-2969d44c1ad5","Type":"ContainerDied","Data":"e9c90e41c6e32a490492f7cdbaf233137eb96ac3ecd876f57a74114cd5f4c029"} Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.795884 4922 scope.go:117] "RemoveContainer" containerID="4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.818554 4922 scope.go:117] "RemoveContainer" containerID="d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.869133 4922 scope.go:117] "RemoveContainer" containerID="f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e" Feb 18 12:34:20 crc kubenswrapper[4922]: E0218 12:34:20.869500 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e\": container with ID starting with f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e not found: ID does not exist" containerID="f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.869544 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e"} err="failed to get container status \"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e\": rpc error: code = NotFound desc = could not find container \"f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e\": container with ID starting with f1c7e176a50ab4ad1ed3a22837e437925e2c95b8a9a2c11b7bebb33d69dfda4e not found: ID does not exist" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.869573 4922 scope.go:117] 
"RemoveContainer" containerID="4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f" Feb 18 12:34:20 crc kubenswrapper[4922]: E0218 12:34:20.869915 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f\": container with ID starting with 4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f not found: ID does not exist" containerID="4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.869956 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f"} err="failed to get container status \"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f\": rpc error: code = NotFound desc = could not find container \"4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f\": container with ID starting with 4b6426978a8c701dfd748b188e2342eeb4b3fe08cb2fc2f0798d400869864f5f not found: ID does not exist" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.869990 4922 scope.go:117] "RemoveContainer" containerID="d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e" Feb 18 12:34:20 crc kubenswrapper[4922]: E0218 12:34:20.870271 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e\": container with ID starting with d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e not found: ID does not exist" containerID="d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.870303 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e"} err="failed to get container status \"d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e\": rpc error: code = NotFound desc = could not find container \"d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e\": container with ID starting with d3ee751d5747cbf6f52f6c827ede651bb584829aee903c09210e24a6a7b80f9e not found: ID does not exist" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.892510 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" (UID: "d1f3187c-1e35-4bda-a0d3-2969d44c1ad5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:34:20 crc kubenswrapper[4922]: I0218 12:34:20.928229 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:21 crc kubenswrapper[4922]: I0218 12:34:21.099387 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:21 crc kubenswrapper[4922]: I0218 12:34:21.109395 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vqpdr"] Feb 18 12:34:22 crc kubenswrapper[4922]: I0218 12:34:22.984785 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" path="/var/lib/kubelet/pods/d1f3187c-1e35-4bda-a0d3-2969d44c1ad5/volumes" Feb 18 12:36:09 crc kubenswrapper[4922]: I0218 12:36:09.807697 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:36:09 crc kubenswrapper[4922]: I0218 12:36:09.808217 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:36:39 crc kubenswrapper[4922]: I0218 12:36:39.807849 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:36:39 crc kubenswrapper[4922]: I0218 12:36:39.808424 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:37:09 crc kubenswrapper[4922]: I0218 12:37:09.807931 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:37:09 crc kubenswrapper[4922]: I0218 12:37:09.809216 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:37:09 crc kubenswrapper[4922]: I0218 12:37:09.809303 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:37:09 crc kubenswrapper[4922]: I0218 12:37:09.810074 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:37:09 crc kubenswrapper[4922]: I0218 12:37:09.810143 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654" gracePeriod=600 Feb 18 12:37:10 crc kubenswrapper[4922]: I0218 12:37:10.387215 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654" exitCode=0 Feb 18 12:37:10 crc kubenswrapper[4922]: I0218 12:37:10.387353 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654"} Feb 18 12:37:10 crc kubenswrapper[4922]: I0218 12:37:10.387670 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62"} Feb 18 12:37:10 crc kubenswrapper[4922]: I0218 12:37:10.387688 4922 scope.go:117] "RemoveContainer" containerID="16600860fb4ac9263fdc6d81dfcafd7d172469c3e817b2a15138f9381adaba7a" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.771792 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:04 crc kubenswrapper[4922]: E0218 12:38:04.772922 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="extract-content" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.772939 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="extract-content" Feb 18 12:38:04 crc kubenswrapper[4922]: E0218 12:38:04.772958 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="registry-server" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.772966 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="registry-server" Feb 18 12:38:04 crc kubenswrapper[4922]: E0218 12:38:04.772992 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="extract-utilities" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.773002 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="extract-utilities" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.773273 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f3187c-1e35-4bda-a0d3-2969d44c1ad5" containerName="registry-server" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.775603 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.786743 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.895198 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqsr\" (UniqueName: \"kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.895926 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.896452 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.999071 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.999337 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqsr\" (UniqueName: \"kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.999385 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.999626 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:04 crc kubenswrapper[4922]: I0218 12:38:04.999965 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:05 crc kubenswrapper[4922]: I0218 12:38:05.030399 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ppqsr\" (UniqueName: \"kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr\") pod \"community-operators-rj9ln\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:05 crc kubenswrapper[4922]: I0218 12:38:05.103393 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:05 crc kubenswrapper[4922]: I0218 12:38:05.615702 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:06 crc kubenswrapper[4922]: I0218 12:38:06.111935 4922 generic.go:334] "Generic (PLEG): container finished" podID="9802c743-deb5-4b6c-9484-2336cb49265c" containerID="0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f" exitCode=0 Feb 18 12:38:06 crc kubenswrapper[4922]: I0218 12:38:06.111980 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerDied","Data":"0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f"} Feb 18 12:38:06 crc kubenswrapper[4922]: I0218 12:38:06.112026 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerStarted","Data":"3eb4a03c2248714dc926c7d21c85f1bf14169f529c1694e23cb56a243ebfe742"} Feb 18 12:38:06 crc kubenswrapper[4922]: I0218 12:38:06.114935 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:38:08 crc kubenswrapper[4922]: I0218 12:38:08.133839 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerStarted","Data":"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf"} Feb 18 12:38:13 crc kubenswrapper[4922]: I0218 12:38:13.176048 4922 generic.go:334] "Generic (PLEG): container finished" podID="9802c743-deb5-4b6c-9484-2336cb49265c" containerID="801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf" exitCode=0 Feb 18 12:38:13 crc kubenswrapper[4922]: I0218 12:38:13.176117 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerDied","Data":"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf"} Feb 18 12:38:15 crc kubenswrapper[4922]: I0218 12:38:15.200437 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerStarted","Data":"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126"} Feb 18 12:38:15 crc kubenswrapper[4922]: I0218 12:38:15.226143 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj9ln" podStartSLOduration=3.265438528 podStartE2EDuration="11.226120474s" podCreationTimestamp="2026-02-18 12:38:04 +0000 UTC" firstStartedPulling="2026-02-18 12:38:06.114045678 +0000 UTC m=+3687.841749758" lastFinishedPulling="2026-02-18 12:38:14.074727604 +0000 UTC m=+3695.802431704" observedRunningTime="2026-02-18 12:38:15.21887643 +0000 UTC m=+3696.946580510" watchObservedRunningTime="2026-02-18 
12:38:15.226120474 +0000 UTC m=+3696.953824554" Feb 18 12:38:25 crc kubenswrapper[4922]: I0218 12:38:25.103871 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:25 crc kubenswrapper[4922]: I0218 12:38:25.104656 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:25 crc kubenswrapper[4922]: I0218 12:38:25.151270 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:25 crc kubenswrapper[4922]: I0218 12:38:25.352912 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:25 crc kubenswrapper[4922]: I0218 12:38:25.407314 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:27 crc kubenswrapper[4922]: I0218 12:38:27.307916 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rj9ln" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="registry-server" containerID="cri-o://8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126" gracePeriod=2 Feb 18 12:38:27 crc kubenswrapper[4922]: I0218 12:38:27.867877 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.058764 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities\") pod \"9802c743-deb5-4b6c-9484-2336cb49265c\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.058838 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqsr\" (UniqueName: \"kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr\") pod \"9802c743-deb5-4b6c-9484-2336cb49265c\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.058960 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content\") pod \"9802c743-deb5-4b6c-9484-2336cb49265c\" (UID: \"9802c743-deb5-4b6c-9484-2336cb49265c\") " Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.059733 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities" (OuterVolumeSpecName: "utilities") pod "9802c743-deb5-4b6c-9484-2336cb49265c" (UID: "9802c743-deb5-4b6c-9484-2336cb49265c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.067282 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr" (OuterVolumeSpecName: "kube-api-access-ppqsr") pod "9802c743-deb5-4b6c-9484-2336cb49265c" (UID: "9802c743-deb5-4b6c-9484-2336cb49265c"). InnerVolumeSpecName "kube-api-access-ppqsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.121523 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9802c743-deb5-4b6c-9484-2336cb49265c" (UID: "9802c743-deb5-4b6c-9484-2336cb49265c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.162058 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.162098 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqsr\" (UniqueName: \"kubernetes.io/projected/9802c743-deb5-4b6c-9484-2336cb49265c-kube-api-access-ppqsr\") on node \"crc\" DevicePath \"\"" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.162111 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9802c743-deb5-4b6c-9484-2336cb49265c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.320116 4922 generic.go:334] "Generic (PLEG): container finished" podID="9802c743-deb5-4b6c-9484-2336cb49265c" containerID="8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126" exitCode=0 Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.320175 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj9ln" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.320173 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerDied","Data":"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126"} Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.320315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj9ln" event={"ID":"9802c743-deb5-4b6c-9484-2336cb49265c","Type":"ContainerDied","Data":"3eb4a03c2248714dc926c7d21c85f1bf14169f529c1694e23cb56a243ebfe742"} Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.320341 4922 scope.go:117] "RemoveContainer" containerID="8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.343854 4922 scope.go:117] "RemoveContainer" containerID="801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.364753 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.379043 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rj9ln"] Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.380496 4922 scope.go:117] "RemoveContainer" containerID="0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.420055 4922 scope.go:117] "RemoveContainer" containerID="8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126" Feb 18 12:38:28 crc kubenswrapper[4922]: E0218 12:38:28.420523 4922 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126\": container with ID starting with 8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126 not found: ID does not exist" containerID="8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.420564 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126"} err="failed to get container status \"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126\": rpc error: code = NotFound desc = could not find container \"8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126\": container with ID starting with 8c12b89aeab1b8e1855ba7f24cf663759741c9a00c975fe91969456b730c9126 not found: ID does not exist" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.420591 4922 scope.go:117] "RemoveContainer" containerID="801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf" Feb 18 12:38:28 crc kubenswrapper[4922]: E0218 12:38:28.420871 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf\": container with ID starting with 801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf not found: ID does not exist" containerID="801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.420942 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf"} err="failed to get container status \"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf\": rpc error: code = NotFound desc = could not find container \"801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf\": container with ID starting with 801d1c22b45cb2e89fe95fe3ea38c34d7591eb2afb03c53f75e7344cc3e55fcf not found: ID does not exist" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.420984 4922 scope.go:117] "RemoveContainer" containerID="0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f" Feb 18 12:38:28 crc kubenswrapper[4922]: E0218 12:38:28.421404 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f\": container with ID starting with 0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f not found: ID does not exist" containerID="0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.421437 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f"} err="failed to get container status \"0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f\": rpc error: code = NotFound desc = could not find container \"0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f\": container with ID starting with 0fdda781022745ae094779b7a89ad71538c6121b5be91abadd095ef46c0b037f not found: ID does not exist" Feb 18 12:38:28 crc kubenswrapper[4922]: I0218 12:38:28.987275 4922 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" path="/var/lib/kubelet/pods/9802c743-deb5-4b6c-9484-2336cb49265c/volumes" Feb 18 12:39:39 crc kubenswrapper[4922]: I0218 12:39:39.807584 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:39:39 crc kubenswrapper[4922]: I0218 12:39:39.808188 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:40:09 crc kubenswrapper[4922]: I0218 12:40:09.808123 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:40:09 crc kubenswrapper[4922]: I0218 12:40:09.808951 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:40:39 crc kubenswrapper[4922]: I0218 12:40:39.806941 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:40:39 crc kubenswrapper[4922]: I0218 12:40:39.808228 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:40:39 crc kubenswrapper[4922]: I0218 12:40:39.808610 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:40:39 crc kubenswrapper[4922]: I0218 12:40:39.809675 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:40:39 crc kubenswrapper[4922]: I0218 12:40:39.809756 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" gracePeriod=600 Feb 18 12:40:39 crc kubenswrapper[4922]: E0218 12:40:39.941726 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:40:40 crc kubenswrapper[4922]: I0218 12:40:40.515076 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" exitCode=0 Feb 18 12:40:40 crc kubenswrapper[4922]: I0218 12:40:40.515153 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62"} Feb 18 12:40:40 crc kubenswrapper[4922]: I0218 12:40:40.515503 4922 scope.go:117] "RemoveContainer" containerID="617209a8a29981601b193fadb47bd31c1969eee3e663e9025cd5b23991442654" Feb 18 12:40:40 crc kubenswrapper[4922]: I0218 12:40:40.516200 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:40:40 crc kubenswrapper[4922]: E0218 12:40:40.516507 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:40:54 crc kubenswrapper[4922]: I0218 12:40:54.973676 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:40:54 crc kubenswrapper[4922]: E0218 12:40:54.974544 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:41:05 crc kubenswrapper[4922]: I0218 12:41:05.973255 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:41:05 crc kubenswrapper[4922]: E0218 12:41:05.974090 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:41:18 crc kubenswrapper[4922]: I0218 12:41:18.979634 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:41:18 crc kubenswrapper[4922]: E0218 12:41:18.980575 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:41:32 crc kubenswrapper[4922]: I0218 12:41:32.973001 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:41:32 crc kubenswrapper[4922]: E0218 12:41:32.973778 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:41:46 crc kubenswrapper[4922]: I0218 12:41:46.976306 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:41:46 crc kubenswrapper[4922]: E0218 12:41:46.977326 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:42:00 crc kubenswrapper[4922]: I0218 12:42:00.973270 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:42:00 crc kubenswrapper[4922]: E0218 12:42:00.974055 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:42:15 crc kubenswrapper[4922]: I0218 12:42:15.974196 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:42:15 crc kubenswrapper[4922]: E0218 12:42:15.975066 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:42:28 crc kubenswrapper[4922]: I0218 12:42:28.981143 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:42:28 crc kubenswrapper[4922]: E0218 12:42:28.982067 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.251915 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:42:42 crc kubenswrapper[4922]: E0218 12:42:42.253032 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="extract-content" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.253051 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="extract-content" Feb 18 12:42:42 crc kubenswrapper[4922]: E0218 12:42:42.253090 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="registry-server" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.253098 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="registry-server" Feb 18 12:42:42 crc kubenswrapper[4922]: E0218 12:42:42.253110 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="extract-utilities" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.253119 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="extract-utilities" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.253334 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9802c743-deb5-4b6c-9484-2336cb49265c" containerName="registry-server" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.255115 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.264056 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.326180 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.326571 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsfp\" (UniqueName: \"kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.326792 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.429570 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsfp\" (UniqueName: \"kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.429700 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.429877 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.430351 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.430393 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.452429 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5xsfp\" (UniqueName: \"kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp\") pod \"redhat-operators-clkkl\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.580561 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:42:42 crc kubenswrapper[4922]: I0218 12:42:42.973550 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:42:42 crc kubenswrapper[4922]: E0218 12:42:42.974203 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:42:43 crc kubenswrapper[4922]: I0218 12:42:43.083055 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:42:43 crc kubenswrapper[4922]: I0218 12:42:43.569007 4922 generic.go:334] "Generic (PLEG): container finished" podID="63579133-a220-43c4-a314-288bc8f38929" containerID="e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663" exitCode=0 Feb 18 12:42:43 crc kubenswrapper[4922]: I0218 12:42:43.569053 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerDied","Data":"e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663"} Feb 18 12:42:43 crc kubenswrapper[4922]: I0218 12:42:43.569077 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerStarted","Data":"dd225ac6793d793b66eff5b5c2ffd76a38a665620014f94d05e48321aa82ad41"} Feb 18 12:42:45 crc kubenswrapper[4922]: I0218 12:42:45.590476 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerStarted","Data":"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221"} Feb 18 12:42:53 crc kubenswrapper[4922]: I0218 12:42:53.682482 4922 generic.go:334] "Generic (PLEG): container finished" podID="63579133-a220-43c4-a314-288bc8f38929" containerID="196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221" exitCode=0 Feb 18 12:42:53 crc kubenswrapper[4922]: I0218 12:42:53.682562 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerDied","Data":"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221"} Feb 18 12:42:54 crc kubenswrapper[4922]: I0218 12:42:54.693837 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerStarted","Data":"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99"} Feb 18 12:42:54 crc kubenswrapper[4922]: I0218 12:42:54.713049 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-clkkl" podStartSLOduration=2.202911295 podStartE2EDuration="12.71302573s" podCreationTimestamp="2026-02-18 12:42:42 +0000 UTC" firstStartedPulling="2026-02-18 12:42:43.57176106 +0000 UTC m=+3965.299465140" lastFinishedPulling="2026-02-18 12:42:54.081875495 +0000 UTC m=+3975.809579575" observedRunningTime="2026-02-18 12:42:54.711355078 +0000 UTC m=+3976.439059168" watchObservedRunningTime="2026-02-18 12:42:54.71302573 +0000 UTC m=+3976.440729810" Feb 18 12:42:56 crc kubenswrapper[4922]: I0218 12:42:56.974514 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:42:56 crc kubenswrapper[4922]: E0218 12:42:56.975221 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:43:02 crc kubenswrapper[4922]: I0218 12:43:02.581493 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:02 crc kubenswrapper[4922]: I0218 12:43:02.582544 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:02 crc kubenswrapper[4922]: I0218 12:43:02.629881 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:02 crc kubenswrapper[4922]: I0218 12:43:02.809491 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:02 crc kubenswrapper[4922]: I0218 12:43:02.863878 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:43:04 crc kubenswrapper[4922]: I0218 12:43:04.779574 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-clkkl" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="registry-server" containerID="cri-o://a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99" gracePeriod=2 Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.276729 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.369669 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities\") pod \"63579133-a220-43c4-a314-288bc8f38929\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.369819 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xsfp\" (UniqueName: \"kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp\") pod \"63579133-a220-43c4-a314-288bc8f38929\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.369853 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content\") pod \"63579133-a220-43c4-a314-288bc8f38929\" (UID: \"63579133-a220-43c4-a314-288bc8f38929\") " Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.370699 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities" (OuterVolumeSpecName: "utilities") pod "63579133-a220-43c4-a314-288bc8f38929" (UID: "63579133-a220-43c4-a314-288bc8f38929"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.375642 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp" (OuterVolumeSpecName: "kube-api-access-5xsfp") pod "63579133-a220-43c4-a314-288bc8f38929" (UID: "63579133-a220-43c4-a314-288bc8f38929"). InnerVolumeSpecName "kube-api-access-5xsfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.471610 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xsfp\" (UniqueName: \"kubernetes.io/projected/63579133-a220-43c4-a314-288bc8f38929-kube-api-access-5xsfp\") on node \"crc\" DevicePath \"\"" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.471651 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.502994 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63579133-a220-43c4-a314-288bc8f38929" (UID: "63579133-a220-43c4-a314-288bc8f38929"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.573470 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63579133-a220-43c4-a314-288bc8f38929-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.792068 4922 generic.go:334] "Generic (PLEG): container finished" podID="63579133-a220-43c4-a314-288bc8f38929" containerID="a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99" exitCode=0 Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.792109 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-clkkl" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.792124 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerDied","Data":"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99"} Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.792164 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-clkkl" event={"ID":"63579133-a220-43c4-a314-288bc8f38929","Type":"ContainerDied","Data":"dd225ac6793d793b66eff5b5c2ffd76a38a665620014f94d05e48321aa82ad41"} Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.792180 4922 scope.go:117] "RemoveContainer" containerID="a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.815410 4922 scope.go:117] "RemoveContainer" containerID="196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221" Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.826080 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:43:05 crc kubenswrapper[4922]: I0218 12:43:05.836383 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-clkkl"] Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.074302 4922 scope.go:117] "RemoveContainer" containerID="e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.321151 4922 scope.go:117] "RemoveContainer" containerID="a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99" Feb 18 12:43:06 crc kubenswrapper[4922]: E0218 12:43:06.321786 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99\": container with ID starting with a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99 not found: ID does not exist" containerID="a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.321832 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99"} err="failed to get container status \"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99\": rpc error: code = NotFound desc = could not find container \"a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99\": container with ID starting with a9ca6240108d56cc66c67300c6c6370746b70144120b1ea6d67266a50c71dd99 not found: ID does not exist" Feb 18 12:43:06 crc 
kubenswrapper[4922]: I0218 12:43:06.321898 4922 scope.go:117] "RemoveContainer" containerID="196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221" Feb 18 12:43:06 crc kubenswrapper[4922]: E0218 12:43:06.322228 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221\": container with ID starting with 196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221 not found: ID does not exist" containerID="196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.322260 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221"} err="failed to get container status \"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221\": rpc error: code = NotFound desc = could not find container \"196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221\": container with ID starting with 196ebfcf38d00312eef2d80b81e8ac6fe059cf735f5738aecf10ce0fc9c83221 not found: ID does not exist" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.322283 4922 scope.go:117] "RemoveContainer" containerID="e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663" Feb 18 12:43:06 crc kubenswrapper[4922]: E0218 12:43:06.322569 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663\": container with ID starting with e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663 not found: ID does not exist" containerID="e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.322600 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663"} err="failed to get container status \"e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663\": rpc error: code = NotFound desc = could not find container \"e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663\": container with ID starting with e24ebb2ed1b8046bc9f6b6143fa1ec40cc72dffd5ac252d890717327c4135663 not found: ID does not exist" Feb 18 12:43:06 crc kubenswrapper[4922]: I0218 12:43:06.984423 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63579133-a220-43c4-a314-288bc8f38929" path="/var/lib/kubelet/pods/63579133-a220-43c4-a314-288bc8f38929/volumes" Feb 18 12:43:09 crc kubenswrapper[4922]: I0218 12:43:09.973845 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:43:09 crc kubenswrapper[4922]: E0218 12:43:09.975752 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:43:24 crc kubenswrapper[4922]: I0218 12:43:24.974032 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" 
Feb 18 12:43:24 crc kubenswrapper[4922]: E0218 12:43:24.974999 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:43:35 crc kubenswrapper[4922]: I0218 12:43:35.974341 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:43:35 crc kubenswrapper[4922]: E0218 12:43:35.975270 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:43:46 crc kubenswrapper[4922]: I0218 12:43:46.974725 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:43:46 crc kubenswrapper[4922]: E0218 12:43:46.975616 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:44:00 crc kubenswrapper[4922]: I0218 12:44:00.972900 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:44:00 crc kubenswrapper[4922]: E0218 12:44:00.973817 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:44:12 crc kubenswrapper[4922]: I0218 12:44:12.973681 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:44:12 crc kubenswrapper[4922]: E0218 12:44:12.975235 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:44:24 crc kubenswrapper[4922]: I0218 12:44:24.973654 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:44:24 crc kubenswrapper[4922]: E0218 12:44:24.974618 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:44:36 crc kubenswrapper[4922]: I0218 12:44:36.972952 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:44:36 crc kubenswrapper[4922]: E0218 12:44:36.973801 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:44:49 crc kubenswrapper[4922]: I0218 12:44:49.973026 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:44:49 crc kubenswrapper[4922]: E0218 12:44:49.973885 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.181419 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6"] Feb 18 12:45:00 crc kubenswrapper[4922]: E0218 12:45:00.183015 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.183036 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4922]: E0218 12:45:00.183084 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="extract-content" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.183091 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="extract-content" Feb 18 12:45:00 crc kubenswrapper[4922]: E0218 12:45:00.183114 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="extract-utilities" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.183123 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="extract-utilities" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.183453 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="63579133-a220-43c4-a314-288bc8f38929" containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.184538 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.187434 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.187712 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.193235 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6"] Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.314774 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.315139 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.315249 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7sg\" (UniqueName: \"kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.416826 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7sg\" (UniqueName: \"kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.417000 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.417060 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.418624 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume\") pod 
\"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.424635 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.436178 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7sg\" (UniqueName: \"kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg\") pod \"collect-profiles-29523645-wb6x6\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:00 crc kubenswrapper[4922]: I0218 12:45:00.514484 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:01 crc kubenswrapper[4922]: I0218 12:45:01.031663 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6"] Feb 18 12:45:01 crc kubenswrapper[4922]: I0218 12:45:01.833683 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" event={"ID":"69f484e4-9c27-40bd-86e6-774e5b7d6b34","Type":"ContainerStarted","Data":"caa6534508d40a94ab11fe87cec89638773cf6fb1ccd43b4e9b6095ba98cd870"} Feb 18 12:45:01 crc kubenswrapper[4922]: I0218 12:45:01.834291 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" event={"ID":"69f484e4-9c27-40bd-86e6-774e5b7d6b34","Type":"ContainerStarted","Data":"3535708cc41f15d330adc08d8f9e5ae99755591c5c4890fa3645cc353e975da1"} Feb 18 12:45:01 crc kubenswrapper[4922]: I0218 12:45:01.852004 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" podStartSLOduration=1.851987152 podStartE2EDuration="1.851987152s" podCreationTimestamp="2026-02-18 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:45:01.848912924 +0000 UTC m=+4103.576617004" watchObservedRunningTime="2026-02-18 12:45:01.851987152 +0000 UTC m=+4103.579691232" Feb 18 12:45:02 crc kubenswrapper[4922]: I0218 12:45:02.846786 4922 generic.go:334] "Generic (PLEG): container finished" podID="69f484e4-9c27-40bd-86e6-774e5b7d6b34" containerID="caa6534508d40a94ab11fe87cec89638773cf6fb1ccd43b4e9b6095ba98cd870" exitCode=0 Feb 18 12:45:02 crc kubenswrapper[4922]: I0218 12:45:02.846908 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" event={"ID":"69f484e4-9c27-40bd-86e6-774e5b7d6b34","Type":"ContainerDied","Data":"caa6534508d40a94ab11fe87cec89638773cf6fb1ccd43b4e9b6095ba98cd870"} Feb 18 12:45:02 crc kubenswrapper[4922]: I0218 12:45:02.973693 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:45:02 crc kubenswrapper[4922]: E0218 12:45:02.974630 4922 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.269865 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.400035 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf7sg\" (UniqueName: \"kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg\") pod \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.400097 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume\") pod \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.400250 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume\") pod \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\" (UID: \"69f484e4-9c27-40bd-86e6-774e5b7d6b34\") " Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.400814 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume" (OuterVolumeSpecName: "config-volume") pod "69f484e4-9c27-40bd-86e6-774e5b7d6b34" (UID: "69f484e4-9c27-40bd-86e6-774e5b7d6b34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.406165 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69f484e4-9c27-40bd-86e6-774e5b7d6b34" (UID: "69f484e4-9c27-40bd-86e6-774e5b7d6b34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.406289 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg" (OuterVolumeSpecName: "kube-api-access-gf7sg") pod "69f484e4-9c27-40bd-86e6-774e5b7d6b34" (UID: "69f484e4-9c27-40bd-86e6-774e5b7d6b34"). InnerVolumeSpecName "kube-api-access-gf7sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.502425 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69f484e4-9c27-40bd-86e6-774e5b7d6b34-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.502460 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf7sg\" (UniqueName: \"kubernetes.io/projected/69f484e4-9c27-40bd-86e6-774e5b7d6b34-kube-api-access-gf7sg\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.502471 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f484e4-9c27-40bd-86e6-774e5b7d6b34-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.868352 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" event={"ID":"69f484e4-9c27-40bd-86e6-774e5b7d6b34","Type":"ContainerDied","Data":"3535708cc41f15d330adc08d8f9e5ae99755591c5c4890fa3645cc353e975da1"} Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.868429 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3535708cc41f15d330adc08d8f9e5ae99755591c5c4890fa3645cc353e975da1" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.868659 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-wb6x6" Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.949099 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97"] Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.957168 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-9hg97"] Feb 18 12:45:04 crc kubenswrapper[4922]: I0218 12:45:04.986123 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2dabc9-c094-41a8-8efd-7b113f5c634c" path="/var/lib/kubelet/pods/ee2dabc9-c094-41a8-8efd-7b113f5c634c/volumes" Feb 18 12:45:13 crc kubenswrapper[4922]: I0218 12:45:13.833791 4922 scope.go:117] "RemoveContainer" containerID="8bef9aa4b92aba91322be4b15768a495bfe0d2b031bccfdb47f0999ccd8a7508" Feb 18 12:45:17 crc kubenswrapper[4922]: I0218 12:45:17.973939 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:45:17 crc kubenswrapper[4922]: E0218 12:45:17.975955 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:45:32 crc kubenswrapper[4922]: I0218 12:45:32.973100 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:45:32 crc kubenswrapper[4922]: E0218 12:45:32.974066 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.288080 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gsthg"] Feb 18 12:45:38 crc kubenswrapper[4922]: E0218 12:45:38.289011 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f484e4-9c27-40bd-86e6-774e5b7d6b34" containerName="collect-profiles" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.289024 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f484e4-9c27-40bd-86e6-774e5b7d6b34" containerName="collect-profiles" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.289286 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f484e4-9c27-40bd-86e6-774e5b7d6b34" containerName="collect-profiles" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.290593 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.298429 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsthg"] Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.388680 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-catalog-content\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.388758 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-utilities\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.388904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh589\" (UniqueName: \"kubernetes.io/projected/0c2d2657-497c-4512-97ff-be630635c1df-kube-api-access-nh589\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.490168 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh589\" (UniqueName: \"kubernetes.io/projected/0c2d2657-497c-4512-97ff-be630635c1df-kube-api-access-nh589\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.490307 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-catalog-content\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.490349 4922 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-utilities\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.490873 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-catalog-content\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.490922 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2d2657-497c-4512-97ff-be630635c1df-utilities\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.862962 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh589\" (UniqueName: \"kubernetes.io/projected/0c2d2657-497c-4512-97ff-be630635c1df-kube-api-access-nh589\") pod \"redhat-marketplace-gsthg\" (UID: \"0c2d2657-497c-4512-97ff-be630635c1df\") " pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:38 crc kubenswrapper[4922]: I0218 12:45:38.918250 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:39 crc kubenswrapper[4922]: I0218 12:45:39.574780 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsthg"] Feb 18 12:45:40 crc kubenswrapper[4922]: I0218 12:45:40.176130 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c2d2657-497c-4512-97ff-be630635c1df" containerID="d7f8d5c5fec09a56bccab7f38514244874b62e4137de81a91c7d5b473889adec" exitCode=0 Feb 18 12:45:40 crc kubenswrapper[4922]: I0218 12:45:40.176181 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsthg" event={"ID":"0c2d2657-497c-4512-97ff-be630635c1df","Type":"ContainerDied","Data":"d7f8d5c5fec09a56bccab7f38514244874b62e4137de81a91c7d5b473889adec"} Feb 18 12:45:40 crc kubenswrapper[4922]: I0218 12:45:40.176601 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsthg" event={"ID":"0c2d2657-497c-4512-97ff-be630635c1df","Type":"ContainerStarted","Data":"bcd76dcd25dca3e3cfdb75b80d337acdf50960fd0a2d8689d631660c1968d8d1"} Feb 18 12:45:40 crc kubenswrapper[4922]: I0218 12:45:40.178784 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.477272 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.479654 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.487895 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.550847 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.550957 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5v8\" (UniqueName: \"kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.550983 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.653184 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.653279 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5v8\" (UniqueName: \"kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.653312 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.653713 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.653979 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.671118 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-js5v8\" (UniqueName: \"kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8\") pod \"certified-operators-tnmx6\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:41 crc kubenswrapper[4922]: I0218 12:45:41.801940 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:42 crc kubenswrapper[4922]: I0218 12:45:42.366780 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:45:42 crc kubenswrapper[4922]: W0218 12:45:42.367183 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c710512_6ce5_40e7_9085_70e8516bb4c2.slice/crio-749290a5cf2aaed7b05cc10724ddebd9c7dc542bda1f66814d677749f6e69a95 WatchSource:0}: Error finding container 749290a5cf2aaed7b05cc10724ddebd9c7dc542bda1f66814d677749f6e69a95: Status 404 returned error can't find the container with id 749290a5cf2aaed7b05cc10724ddebd9c7dc542bda1f66814d677749f6e69a95 Feb 18 12:45:43 crc kubenswrapper[4922]: I0218 12:45:43.205242 4922 generic.go:334] "Generic (PLEG): container finished" podID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerID="af8772c92370f86d738fd284221d23aa9beb9592dab1e6643aaeed0fa10bef0f" exitCode=0 Feb 18 12:45:43 crc kubenswrapper[4922]: I0218 12:45:43.205317 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerDied","Data":"af8772c92370f86d738fd284221d23aa9beb9592dab1e6643aaeed0fa10bef0f"} Feb 18 12:45:43 crc kubenswrapper[4922]: I0218 12:45:43.205618 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerStarted","Data":"749290a5cf2aaed7b05cc10724ddebd9c7dc542bda1f66814d677749f6e69a95"} Feb 18 12:45:43 crc kubenswrapper[4922]: I0218 12:45:43.973731 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:45:44 crc kubenswrapper[4922]: I0218 12:45:44.218402 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerStarted","Data":"a9eb7565239981ac85c892177039fdb2c36b50119f7e6d6a757e15e00ae601b9"} Feb 18 12:45:44 crc kubenswrapper[4922]: I0218 12:45:44.222075 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0"} Feb 18 12:45:46 crc kubenswrapper[4922]: I0218 12:45:46.239869 4922 generic.go:334] "Generic (PLEG): container finished" podID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerID="a9eb7565239981ac85c892177039fdb2c36b50119f7e6d6a757e15e00ae601b9" exitCode=0 Feb 18 12:45:46 crc kubenswrapper[4922]: I0218 12:45:46.239961 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerDied","Data":"a9eb7565239981ac85c892177039fdb2c36b50119f7e6d6a757e15e00ae601b9"} Feb 18 12:45:49 crc 
kubenswrapper[4922]: I0218 12:45:49.272884 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerStarted","Data":"3c1e6e3c93bfe3283d4e4bfd6869e425e9c364ada040ff5a5aaf099ba396f03c"} Feb 18 12:45:49 crc kubenswrapper[4922]: I0218 12:45:49.276463 4922 generic.go:334] "Generic (PLEG): container finished" podID="0c2d2657-497c-4512-97ff-be630635c1df" containerID="e5a1a8b18d8a432f0c72383a517720388154e5174281804915b9ff3434226f96" exitCode=0 Feb 18 12:45:49 crc kubenswrapper[4922]: I0218 12:45:49.276508 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsthg" event={"ID":"0c2d2657-497c-4512-97ff-be630635c1df","Type":"ContainerDied","Data":"e5a1a8b18d8a432f0c72383a517720388154e5174281804915b9ff3434226f96"} Feb 18 12:45:49 crc kubenswrapper[4922]: I0218 12:45:49.296103 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnmx6" podStartSLOduration=3.357383961 podStartE2EDuration="8.296085445s" podCreationTimestamp="2026-02-18 12:45:41 +0000 UTC" firstStartedPulling="2026-02-18 12:45:43.207204529 +0000 UTC m=+4144.934908619" lastFinishedPulling="2026-02-18 12:45:48.145906023 +0000 UTC m=+4149.873610103" observedRunningTime="2026-02-18 12:45:49.294592177 +0000 UTC m=+4151.022296257" watchObservedRunningTime="2026-02-18 12:45:49.296085445 +0000 UTC m=+4151.023789525" Feb 18 12:45:50 crc kubenswrapper[4922]: I0218 12:45:50.286486 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsthg" event={"ID":"0c2d2657-497c-4512-97ff-be630635c1df","Type":"ContainerStarted","Data":"af2124d50d4cd1091518ef7487ec2bd809da05b2c838beb01b80578e23f35293"} Feb 18 12:45:50 crc kubenswrapper[4922]: I0218 12:45:50.306769 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gsthg" podStartSLOduration=2.704796949 podStartE2EDuration="12.306748343s" podCreationTimestamp="2026-02-18 12:45:38 +0000 UTC" firstStartedPulling="2026-02-18 12:45:40.1785515 +0000 UTC m=+4141.906255580" lastFinishedPulling="2026-02-18 12:45:49.780502894 +0000 UTC m=+4151.508206974" observedRunningTime="2026-02-18 12:45:50.303187493 +0000 UTC m=+4152.030891593" watchObservedRunningTime="2026-02-18 12:45:50.306748343 +0000 UTC m=+4152.034452423" Feb 18 12:45:51 crc kubenswrapper[4922]: I0218 12:45:51.802558 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:51 crc kubenswrapper[4922]: I0218 12:45:51.803141 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:51 crc kubenswrapper[4922]: I0218 12:45:51.851953 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:45:58 crc kubenswrapper[4922]: I0218 12:45:58.919278 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:58 crc kubenswrapper[4922]: I0218 12:45:58.919800 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:58 crc kubenswrapper[4922]: I0218 12:45:58.966427 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:59 crc kubenswrapper[4922]: I0218 12:45:59.407581 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gsthg" Feb 18 12:45:59 crc kubenswrapper[4922]: I0218 12:45:59.471197 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsthg"] Feb 18 12:45:59 crc kubenswrapper[4922]: I0218 12:45:59.513671 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 12:45:59 crc kubenswrapper[4922]: I0218 12:45:59.513899 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bqhb" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="registry-server" containerID="cri-o://3c30617879326ec9fb7f35dddf404540d84efb8a598a6b6c0ad31087cd8073f1" gracePeriod=2 Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.376350 4922 generic.go:334] "Generic (PLEG): container finished" podID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerID="3c30617879326ec9fb7f35dddf404540d84efb8a598a6b6c0ad31087cd8073f1" exitCode=0 Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.376403 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerDied","Data":"3c30617879326ec9fb7f35dddf404540d84efb8a598a6b6c0ad31087cd8073f1"} Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.682432 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.722935 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q98tn\" (UniqueName: \"kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn\") pod \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.723062 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities\") pod \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.723220 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content\") pod \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\" (UID: \"f50f60ee-09dd-45e2-aab0-384f2ff99b7d\") " Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.723829 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities" (OuterVolumeSpecName: "utilities") pod "f50f60ee-09dd-45e2-aab0-384f2ff99b7d" (UID: "f50f60ee-09dd-45e2-aab0-384f2ff99b7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.724278 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.729695 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn" (OuterVolumeSpecName: "kube-api-access-q98tn") pod "f50f60ee-09dd-45e2-aab0-384f2ff99b7d" (UID: "f50f60ee-09dd-45e2-aab0-384f2ff99b7d"). InnerVolumeSpecName "kube-api-access-q98tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.750185 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f50f60ee-09dd-45e2-aab0-384f2ff99b7d" (UID: "f50f60ee-09dd-45e2-aab0-384f2ff99b7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.826191 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:00 crc kubenswrapper[4922]: I0218 12:46:00.826233 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q98tn\" (UniqueName: \"kubernetes.io/projected/f50f60ee-09dd-45e2-aab0-384f2ff99b7d-kube-api-access-q98tn\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.393779 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bqhb" Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.393765 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bqhb" event={"ID":"f50f60ee-09dd-45e2-aab0-384f2ff99b7d","Type":"ContainerDied","Data":"81c45efcdac9362802de58dd27bf893197de86f7ebd3df5f164f632b261368cc"} Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.393927 4922 scope.go:117] "RemoveContainer" containerID="3c30617879326ec9fb7f35dddf404540d84efb8a598a6b6c0ad31087cd8073f1" Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.418541 4922 scope.go:117] "RemoveContainer" containerID="dbaac2d048b1bf28c0a544d56094e4190ceeb213bd8b9b6ab6c58e5f9ca98a21" Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.420065 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.429975 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bqhb"] Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.445332 4922 scope.go:117] "RemoveContainer" containerID="66cf4f94781e4ece125829fc4a1a5acf7beefaa52399d35c3cf834cf5448be6c" Feb 18 12:46:01 crc kubenswrapper[4922]: I0218 12:46:01.851028 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:46:02 crc kubenswrapper[4922]: I0218 12:46:02.982987 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" path="/var/lib/kubelet/pods/f50f60ee-09dd-45e2-aab0-384f2ff99b7d/volumes" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.208856 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.209349 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnmx6" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="registry-server" containerID="cri-o://3c1e6e3c93bfe3283d4e4bfd6869e425e9c364ada040ff5a5aaf099ba396f03c" gracePeriod=2 Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.424319 4922 generic.go:334] "Generic (PLEG): container finished" podID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerID="3c1e6e3c93bfe3283d4e4bfd6869e425e9c364ada040ff5a5aaf099ba396f03c" exitCode=0 Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.424376 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerDied","Data":"3c1e6e3c93bfe3283d4e4bfd6869e425e9c364ada040ff5a5aaf099ba396f03c"} Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.724014 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.806992 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities\") pod \"8c710512-6ce5-40e7-9085-70e8516bb4c2\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.807061 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content\") pod \"8c710512-6ce5-40e7-9085-70e8516bb4c2\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.807183 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js5v8\" (UniqueName: \"kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8\") pod \"8c710512-6ce5-40e7-9085-70e8516bb4c2\" (UID: \"8c710512-6ce5-40e7-9085-70e8516bb4c2\") " Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.807526 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities" (OuterVolumeSpecName: "utilities") pod "8c710512-6ce5-40e7-9085-70e8516bb4c2" (UID: "8c710512-6ce5-40e7-9085-70e8516bb4c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.807859 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.814602 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8" (OuterVolumeSpecName: "kube-api-access-js5v8") pod "8c710512-6ce5-40e7-9085-70e8516bb4c2" (UID: "8c710512-6ce5-40e7-9085-70e8516bb4c2"). InnerVolumeSpecName "kube-api-access-js5v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.863398 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c710512-6ce5-40e7-9085-70e8516bb4c2" (UID: "8c710512-6ce5-40e7-9085-70e8516bb4c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.909045 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c710512-6ce5-40e7-9085-70e8516bb4c2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:04 crc kubenswrapper[4922]: I0218 12:46:04.909074 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js5v8\" (UniqueName: \"kubernetes.io/projected/8c710512-6ce5-40e7-9085-70e8516bb4c2-kube-api-access-js5v8\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.436204 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnmx6" event={"ID":"8c710512-6ce5-40e7-9085-70e8516bb4c2","Type":"ContainerDied","Data":"749290a5cf2aaed7b05cc10724ddebd9c7dc542bda1f66814d677749f6e69a95"} Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.436560 4922 scope.go:117] "RemoveContainer" containerID="3c1e6e3c93bfe3283d4e4bfd6869e425e9c364ada040ff5a5aaf099ba396f03c" Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.436284 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnmx6" Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.456798 4922 scope.go:117] "RemoveContainer" containerID="a9eb7565239981ac85c892177039fdb2c36b50119f7e6d6a757e15e00ae601b9" Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.456799 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.465228 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnmx6"] Feb 18 12:46:05 crc kubenswrapper[4922]: I0218 12:46:05.476578 4922 scope.go:117] "RemoveContainer" containerID="af8772c92370f86d738fd284221d23aa9beb9592dab1e6643aaeed0fa10bef0f" Feb 18 12:46:06 crc kubenswrapper[4922]: I0218 12:46:06.990566 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" path="/var/lib/kubelet/pods/8c710512-6ce5-40e7-9085-70e8516bb4c2/volumes" Feb 18 12:48:09 crc kubenswrapper[4922]: I0218 12:48:09.807624 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:48:09 crc kubenswrapper[4922]: I0218 12:48:09.808764 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:48:39 crc kubenswrapper[4922]: I0218 12:48:39.807082 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:48:39 crc kubenswrapper[4922]: I0218 12:48:39.807680 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" 
podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.520578 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521436 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="extract-utilities" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521464 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="extract-utilities" Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521483 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521491 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521503 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="extract-content" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521509 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="extract-content" Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521547 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="extract-utilities" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521555 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="extract-utilities" Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521564 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="extract-content" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521571 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="extract-content" Feb 18 12:48:40 crc kubenswrapper[4922]: E0218 12:48:40.521584 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521592 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521828 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50f60ee-09dd-45e2-aab0-384f2ff99b7d" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.521866 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c710512-6ce5-40e7-9085-70e8516bb4c2" containerName="registry-server" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.523655 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.532766 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.648838 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.648939 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcql8\" (UniqueName: \"kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.648958 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.751462 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcql8\" (UniqueName: \"kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.751507 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.751650 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.752175 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.752393 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.773549 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kcql8\" (UniqueName: \"kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8\") pod \"community-operators-bx9gn\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:40 crc kubenswrapper[4922]: I0218 12:48:40.851034 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:41 crc kubenswrapper[4922]: I0218 12:48:41.388192 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:41 crc kubenswrapper[4922]: I0218 12:48:41.778913 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2259fa1-5d48-4af4-95a8-248656995677" containerID="44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab" exitCode=0 Feb 18 12:48:41 crc kubenswrapper[4922]: I0218 12:48:41.778970 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerDied","Data":"44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab"} Feb 18 12:48:41 crc kubenswrapper[4922]: I0218 12:48:41.779001 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerStarted","Data":"060c2cae1327bf0e518d54b6060282df30e58a89ab4033848f08f53f8eb73e97"} Feb 18 12:48:43 crc kubenswrapper[4922]: I0218 12:48:43.798470 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerStarted","Data":"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73"} Feb 18 12:48:44 crc kubenswrapper[4922]: I0218 12:48:44.810687 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2259fa1-5d48-4af4-95a8-248656995677" containerID="a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73" exitCode=0 Feb 18 12:48:44 crc kubenswrapper[4922]: I0218 12:48:44.810794 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerDied","Data":"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73"} Feb 18 12:48:45 crc kubenswrapper[4922]: I0218 12:48:45.822229 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerStarted","Data":"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062"} Feb 18 12:48:45 crc kubenswrapper[4922]: I0218 12:48:45.838670 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bx9gn" podStartSLOduration=2.405849348 podStartE2EDuration="5.838651744s" podCreationTimestamp="2026-02-18 12:48:40 +0000 UTC" firstStartedPulling="2026-02-18 12:48:41.780805327 +0000 UTC m=+4323.508509407" lastFinishedPulling="2026-02-18 12:48:45.213607723 +0000 UTC m=+4326.941311803" observedRunningTime="2026-02-18 12:48:45.83731857 +0000 UTC m=+4327.565022650" watchObservedRunningTime="2026-02-18 12:48:45.838651744 +0000 UTC m=+4327.566355824" Feb 18 12:48:50 crc kubenswrapper[4922]: I0218 12:48:50.851233 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:50 crc kubenswrapper[4922]: I0218 12:48:50.851669 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:50 crc kubenswrapper[4922]: I0218 12:48:50.901831 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:50 crc kubenswrapper[4922]: I0218 12:48:50.952719 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:51 crc kubenswrapper[4922]: I0218 12:48:51.145083 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:52 crc kubenswrapper[4922]: I0218 12:48:52.880447 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bx9gn" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="registry-server" containerID="cri-o://19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062" gracePeriod=2 Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.766843 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.891517 4922 generic.go:334] "Generic (PLEG): container finished" podID="d2259fa1-5d48-4af4-95a8-248656995677" containerID="19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062" exitCode=0 Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.891573 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx9gn" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.891576 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerDied","Data":"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062"} Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.891710 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx9gn" event={"ID":"d2259fa1-5d48-4af4-95a8-248656995677","Type":"ContainerDied","Data":"060c2cae1327bf0e518d54b6060282df30e58a89ab4033848f08f53f8eb73e97"} Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.891732 4922 scope.go:117] "RemoveContainer" containerID="19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.913097 4922 scope.go:117] "RemoveContainer" containerID="a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.930088 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content\") pod \"d2259fa1-5d48-4af4-95a8-248656995677\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.930294 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcql8\" (UniqueName: \"kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8\") pod \"d2259fa1-5d48-4af4-95a8-248656995677\" (UID: 
\"d2259fa1-5d48-4af4-95a8-248656995677\") " Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.930532 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities\") pod \"d2259fa1-5d48-4af4-95a8-248656995677\" (UID: \"d2259fa1-5d48-4af4-95a8-248656995677\") " Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.931378 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities" (OuterVolumeSpecName: "utilities") pod "d2259fa1-5d48-4af4-95a8-248656995677" (UID: "d2259fa1-5d48-4af4-95a8-248656995677"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.931809 4922 scope.go:117] "RemoveContainer" containerID="44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab" Feb 18 12:48:53 crc kubenswrapper[4922]: I0218 12:48:53.937199 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8" (OuterVolumeSpecName: "kube-api-access-kcql8") pod "d2259fa1-5d48-4af4-95a8-248656995677" (UID: "d2259fa1-5d48-4af4-95a8-248656995677"). InnerVolumeSpecName "kube-api-access-kcql8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.028123 4922 scope.go:117] "RemoveContainer" containerID="19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062" Feb 18 12:48:54 crc kubenswrapper[4922]: E0218 12:48:54.028682 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062\": container with ID starting with 19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062 not found: ID does not exist" containerID="19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.028729 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062"} err="failed to get container status \"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062\": rpc error: code = NotFound desc = could not find container \"19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062\": container with ID starting with 19bfb4fdec197ffadd7986d5ef7545670ca979803a6cf0429378c31163bc6062 not found: ID does not exist" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.028754 4922 scope.go:117] "RemoveContainer" containerID="a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73" Feb 18 12:48:54 crc kubenswrapper[4922]: E0218 12:48:54.029169 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73\": container with ID starting with a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73 not found: ID does not exist" containerID="a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.029302 4922 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73"} err="failed to get container status \"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73\": rpc error: code = NotFound desc = could not find container \"a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73\": container with ID starting with a916979ab662e1964d2095283461a74d2e3cde5e3b4cbc64f91e5fa317792e73 not found: ID does not exist" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.029431 4922 scope.go:117] "RemoveContainer" containerID="44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab" Feb 18 12:48:54 crc kubenswrapper[4922]: E0218 12:48:54.029993 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab\": container with ID starting with 44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab not found: ID does not exist" containerID="44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.030091 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab"} err="failed to get container status \"44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab\": rpc error: code = NotFound desc = could not find container \"44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab\": container with ID starting with 44b26d0885cb416ce09fd2b10a847716b4e1607e6001dc3d68e30e390801aaab not found: ID does not exist" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.032377 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcql8\" (UniqueName: \"kubernetes.io/projected/d2259fa1-5d48-4af4-95a8-248656995677-kube-api-access-kcql8\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.032401 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.264464 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2259fa1-5d48-4af4-95a8-248656995677" (UID: "d2259fa1-5d48-4af4-95a8-248656995677"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.339836 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2259fa1-5d48-4af4-95a8-248656995677-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.528471 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.540640 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bx9gn"] Feb 18 12:48:54 crc kubenswrapper[4922]: I0218 12:48:54.983501 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2259fa1-5d48-4af4-95a8-248656995677" path="/var/lib/kubelet/pods/d2259fa1-5d48-4af4-95a8-248656995677/volumes" Feb 18 12:49:09 crc kubenswrapper[4922]: I0218 12:49:09.807890 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:49:09 crc kubenswrapper[4922]: I0218 12:49:09.808495 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:49:09 crc kubenswrapper[4922]: I0218 12:49:09.808549 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:49:09 crc kubenswrapper[4922]: I0218 12:49:09.809788 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:49:09 crc kubenswrapper[4922]: I0218 12:49:09.809903 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0" gracePeriod=600 Feb 18 12:49:10 crc kubenswrapper[4922]: I0218 12:49:10.039440 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0" exitCode=0 Feb 18 12:49:10 crc kubenswrapper[4922]: I0218 12:49:10.039580 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0"} Feb 18 12:49:10 crc kubenswrapper[4922]: I0218 12:49:10.040157 4922 scope.go:117] "RemoveContainer" containerID="da2f31195c0d03b655b84a936da99ef08c103ba9b2f411b408e8d74bdaa66a62" Feb 18 12:49:11 crc kubenswrapper[4922]: I0218 12:49:11.051410 4922 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324"} Feb 18 12:51:39 crc kubenswrapper[4922]: I0218 12:51:39.807771 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:51:39 crc kubenswrapper[4922]: I0218 12:51:39.808265 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:52:09 crc kubenswrapper[4922]: I0218 12:52:09.807032 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:52:09 crc kubenswrapper[4922]: I0218 12:52:09.808757 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:52:39 crc kubenswrapper[4922]: I0218 12:52:39.807984 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:52:39 crc kubenswrapper[4922]: I0218 12:52:39.808724 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:52:39 crc kubenswrapper[4922]: I0218 12:52:39.808790 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 12:52:39 crc kubenswrapper[4922]: I0218 12:52:39.809875 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:52:39 crc kubenswrapper[4922]: I0218 12:52:39.809944 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" gracePeriod=600 Feb 18 
12:52:39 crc kubenswrapper[4922]: E0218 12:52:39.942934 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:52:40 crc kubenswrapper[4922]: I0218 12:52:40.932324 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" exitCode=0 Feb 18 12:52:40 crc kubenswrapper[4922]: I0218 12:52:40.932539 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324"} Feb 18 12:52:40 crc kubenswrapper[4922]: I0218 12:52:40.932939 4922 scope.go:117] "RemoveContainer" containerID="1d138caa9f48794399f333864aee478905e60e95ed598a5f227130509c079fc0" Feb 18 12:52:40 crc kubenswrapper[4922]: I0218 12:52:40.934176 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:52:40 crc kubenswrapper[4922]: E0218 12:52:40.934764 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:52:54 crc kubenswrapper[4922]: I0218 12:52:54.973376 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:52:54 crc kubenswrapper[4922]: E0218 12:52:54.974202 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:53:07 crc kubenswrapper[4922]: I0218 12:53:07.973119 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:53:07 crc kubenswrapper[4922]: E0218 12:53:07.974120 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:53:21 crc kubenswrapper[4922]: I0218 12:53:21.979287 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:53:21 crc kubenswrapper[4922]: E0218 12:53:21.979977 4922 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:53:35 crc kubenswrapper[4922]: I0218 12:53:35.972826 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:53:35 crc kubenswrapper[4922]: E0218 12:53:35.973753 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.725017 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:53:45 crc kubenswrapper[4922]: E0218 12:53:45.777170 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="extract-utilities" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.777210 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="extract-utilities" Feb 18 12:53:45 crc kubenswrapper[4922]: E0218 12:53:45.777241 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="extract-content" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.777254 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="extract-content" Feb 18 12:53:45 crc kubenswrapper[4922]: E0218 12:53:45.777265 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="registry-server" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.777273 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="registry-server" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.789434 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2259fa1-5d48-4af4-95a8-248656995677" containerName="registry-server" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.821741 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.821879 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.960648 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.960945 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:45 crc kubenswrapper[4922]: I0218 12:53:45.961485 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dndhx\" (UniqueName: \"kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.063021 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dndhx\" (UniqueName: \"kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.063074 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.063143 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.063738 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.064259 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content\") pod \"redhat-operators-hqcnv\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.082538 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dndhx\" (UniqueName: \"kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx\") pod \"redhat-operators-hqcnv\" (UID: 
\"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.147164 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:46 crc kubenswrapper[4922]: I0218 12:53:46.624532 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:53:47 crc kubenswrapper[4922]: I0218 12:53:47.561811 4922 generic.go:334] "Generic (PLEG): container finished" podID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerID="1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391" exitCode=0 Feb 18 12:53:47 crc kubenswrapper[4922]: I0218 12:53:47.561892 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerDied","Data":"1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391"} Feb 18 12:53:47 crc kubenswrapper[4922]: I0218 12:53:47.562139 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerStarted","Data":"30376a75b17d5eebb1bdba8398ea7875600a6f47c29f91d225604e0257fff402"} Feb 18 12:53:47 crc kubenswrapper[4922]: I0218 12:53:47.566225 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:53:48 crc kubenswrapper[4922]: I0218 12:53:48.980830 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:53:48 crc kubenswrapper[4922]: E0218 12:53:48.981405 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:53:49 crc kubenswrapper[4922]: I0218 12:53:49.582671 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerStarted","Data":"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b"} Feb 18 12:53:53 crc kubenswrapper[4922]: I0218 12:53:53.618752 4922 generic.go:334] "Generic (PLEG): container finished" podID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerID="e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b" exitCode=0 Feb 18 12:53:53 crc kubenswrapper[4922]: I0218 12:53:53.618877 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerDied","Data":"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b"} Feb 18 12:53:54 crc kubenswrapper[4922]: I0218 12:53:54.629844 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerStarted","Data":"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8"} Feb 18 12:53:54 crc kubenswrapper[4922]: I0218 12:53:54.650911 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-hqcnv" podStartSLOduration=3.189458545 podStartE2EDuration="9.650887212s" podCreationTimestamp="2026-02-18 12:53:45 +0000 UTC" firstStartedPulling="2026-02-18 12:53:47.565926009 +0000 UTC m=+4629.293630089" lastFinishedPulling="2026-02-18 12:53:54.027354676 +0000 UTC m=+4635.755058756" observedRunningTime="2026-02-18 12:53:54.649533858 +0000 UTC m=+4636.377237938" watchObservedRunningTime="2026-02-18 12:53:54.650887212 +0000 UTC m=+4636.378591292" Feb 18 12:53:56 crc kubenswrapper[4922]: I0218 12:53:56.147877 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:56 crc kubenswrapper[4922]: I0218 12:53:56.148190 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:53:57 crc kubenswrapper[4922]: I0218 12:53:57.193834 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqcnv" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" probeResult="failure" output=< Feb 18 12:53:57 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:53:57 crc kubenswrapper[4922]: > Feb 18 12:54:02 crc kubenswrapper[4922]: I0218 12:54:02.972915 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:54:02 crc kubenswrapper[4922]: E0218 12:54:02.973940 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:54:07 crc kubenswrapper[4922]: I0218 12:54:07.195151 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqcnv" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" probeResult="failure" output=< Feb 18 12:54:07 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 12:54:07 crc kubenswrapper[4922]: > Feb 18 12:54:14 crc kubenswrapper[4922]: I0218 12:54:14.972680 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:54:14 crc kubenswrapper[4922]: E0218 12:54:14.973533 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:54:16 crc kubenswrapper[4922]: I0218 12:54:16.198526 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:54:16 crc kubenswrapper[4922]: I0218 12:54:16.259050 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:54:16 crc kubenswrapper[4922]: I0218 12:54:16.925613 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:54:17 crc kubenswrapper[4922]: I0218 12:54:17.833568 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqcnv" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" containerID="cri-o://4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8" gracePeriod=2 Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.318557 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.453543 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content\") pod \"698ffa93-9036-42f0-9b8f-b40d8bd89799\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.453848 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dndhx\" (UniqueName: \"kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx\") pod \"698ffa93-9036-42f0-9b8f-b40d8bd89799\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.453886 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities\") pod \"698ffa93-9036-42f0-9b8f-b40d8bd89799\" (UID: \"698ffa93-9036-42f0-9b8f-b40d8bd89799\") " Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.454837 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities" (OuterVolumeSpecName: "utilities") pod "698ffa93-9036-42f0-9b8f-b40d8bd89799" (UID: "698ffa93-9036-42f0-9b8f-b40d8bd89799"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.461739 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx" (OuterVolumeSpecName: "kube-api-access-dndhx") pod "698ffa93-9036-42f0-9b8f-b40d8bd89799" (UID: "698ffa93-9036-42f0-9b8f-b40d8bd89799"). InnerVolumeSpecName "kube-api-access-dndhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.557019 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dndhx\" (UniqueName: \"kubernetes.io/projected/698ffa93-9036-42f0-9b8f-b40d8bd89799-kube-api-access-dndhx\") on node \"crc\" DevicePath \"\"" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.557065 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.599242 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "698ffa93-9036-42f0-9b8f-b40d8bd89799" (UID: "698ffa93-9036-42f0-9b8f-b40d8bd89799"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.660044 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/698ffa93-9036-42f0-9b8f-b40d8bd89799-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.845946 4922 generic.go:334] "Generic (PLEG): container finished" podID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerID="4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8" exitCode=0 Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.846015 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerDied","Data":"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8"} Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.846097 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqcnv" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.846129 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqcnv" event={"ID":"698ffa93-9036-42f0-9b8f-b40d8bd89799","Type":"ContainerDied","Data":"30376a75b17d5eebb1bdba8398ea7875600a6f47c29f91d225604e0257fff402"} Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.846156 4922 scope.go:117] "RemoveContainer" containerID="4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.883512 4922 scope.go:117] "RemoveContainer" containerID="e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.894743 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.908083 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqcnv"] Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.914132 4922 scope.go:117] "RemoveContainer" containerID="1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.953583 4922 scope.go:117] "RemoveContainer" containerID="4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8" Feb 18 12:54:18 crc kubenswrapper[4922]: E0218 12:54:18.954088 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8\": container with ID starting with 4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8 not found: ID does not exist" containerID="4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.954119 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8"} err="failed to get container status \"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8\": rpc error: code = NotFound desc = could not find container \"4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8\": container with ID starting with 4e2f3e2ccd8d9fcdf0bcd4fedb1e5412f66508d41fddbbf46882aace021eefc8 not found: ID does not exist" Feb 18 12:54:18 crc 
kubenswrapper[4922]: I0218 12:54:18.954147 4922 scope.go:117] "RemoveContainer" containerID="e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b" Feb 18 12:54:18 crc kubenswrapper[4922]: E0218 12:54:18.954551 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b\": container with ID starting with e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b not found: ID does not exist" containerID="e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.954605 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b"} err="failed to get container status \"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b\": rpc error: code = NotFound desc = could not find container \"e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b\": container with ID starting with e75c9b2ab15b8a47c339719f561374dd3ec0669fefe78913caecbb8e3f4e763b not found: ID does not exist" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.954638 4922 scope.go:117] "RemoveContainer" containerID="1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391" Feb 18 12:54:18 crc kubenswrapper[4922]: E0218 12:54:18.955005 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391\": container with ID starting with 1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391 not found: ID does not exist" containerID="1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.955062 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391"} err="failed to get container status \"1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391\": rpc error: code = NotFound desc = could not find container \"1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391\": container with ID starting with 1ffb8a877d9b78b85d9fa2d690dde256e6eeb627bd355b242bda0e6e5edf9391 not found: ID does not exist" Feb 18 12:54:18 crc kubenswrapper[4922]: I0218 12:54:18.989777 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" path="/var/lib/kubelet/pods/698ffa93-9036-42f0-9b8f-b40d8bd89799/volumes" Feb 18 12:54:25 crc kubenswrapper[4922]: I0218 12:54:25.973595 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:54:25 crc kubenswrapper[4922]: E0218 12:54:25.974213 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:54:37 crc kubenswrapper[4922]: I0218 12:54:37.973812 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" 
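The repeated "RemoveContainer" / "Error syncing pod, skipping" pairs around this point are the kubelet re-queuing the machine-config-daemon pod while its container sits in CrashLoopBackOff; nothing is actually restarted until the back-off window expires. The 5m0s cap is stated in the messages themselves and matches the timeline in this log: the container that exited at 12:52:40 is only started again at 12:57:46. The 10-second base delay and per-restart doubling in the sketch below are the usual kubelet defaults, not values taken from this log, so treat it as an approximation of the schedule rather than the exact implementation:

    # Rough sketch of the kubelet's crash-loop back-off schedule.
    # Assumed (not shown in this log): 10s base delay, doubling per restart.
    # Shown in the log: the 300s (5m0s) cap in the "back-off 5m0s" messages.
    def backoff_schedule(restarts, base=10, cap=300):
        """Back-off delay (seconds) applied before each successive restart."""
        delays, delay = [], base
        for _ in range(restarts):
            delays.append(min(delay, cap))
            delay *= 2
        return delays

    if __name__ == "__main__":
        # Saturates at 300s after a handful of failures, which is why the
        # sync-loop retries above keep being rejected for roughly five minutes.
        print(backoff_schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]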
Feb 18 12:54:37 crc kubenswrapper[4922]: E0218 12:54:37.974658 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:54:51 crc kubenswrapper[4922]: I0218 12:54:51.973642 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:54:51 crc kubenswrapper[4922]: E0218 12:54:51.974561 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:55:06 crc kubenswrapper[4922]: I0218 12:55:06.974518 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:55:06 crc kubenswrapper[4922]: E0218 12:55:06.975479 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:55:20 crc kubenswrapper[4922]: I0218 12:55:20.973319 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:55:20 crc kubenswrapper[4922]: E0218 12:55:20.974229 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:55:33 crc kubenswrapper[4922]: I0218 12:55:33.973142 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:55:33 crc kubenswrapper[4922]: E0218 12:55:33.974088 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:55:48 crc kubenswrapper[4922]: I0218 12:55:48.982048 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:55:48 crc kubenswrapper[4922]: E0218 12:55:48.984587 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:56:03 crc kubenswrapper[4922]: I0218 12:56:03.974664 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:56:03 crc kubenswrapper[4922]: E0218 12:56:03.975983 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:56:17 crc kubenswrapper[4922]: I0218 12:56:17.973345 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:56:17 crc kubenswrapper[4922]: E0218 12:56:17.974276 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:56:30 crc kubenswrapper[4922]: I0218 12:56:30.973497 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:56:30 crc kubenswrapper[4922]: E0218 12:56:30.974699 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.300809 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:33 crc kubenswrapper[4922]: E0218 12:56:33.301678 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="extract-utilities" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.301694 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="extract-utilities" Feb 18 12:56:33 crc kubenswrapper[4922]: E0218 12:56:33.301711 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="extract-content" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.301718 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="extract-content" Feb 18 12:56:33 crc kubenswrapper[4922]: E0218 12:56:33.301736 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 
12:56:33.301745 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.301993 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="698ffa93-9036-42f0-9b8f-b40d8bd89799" containerName="registry-server" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.303722 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.322110 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.454032 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.454182 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.454311 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ms4t\" (UniqueName: \"kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.556637 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ms4t\" (UniqueName: \"kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.557558 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.557822 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.558532 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 
12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.558650 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.578933 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ms4t\" (UniqueName: \"kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t\") pod \"certified-operators-qrs5h\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:33 crc kubenswrapper[4922]: I0218 12:56:33.630985 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:34 crc kubenswrapper[4922]: I0218 12:56:34.242604 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:35 crc kubenswrapper[4922]: I0218 12:56:35.158257 4922 generic.go:334] "Generic (PLEG): container finished" podID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerID="08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632" exitCode=0 Feb 18 12:56:35 crc kubenswrapper[4922]: I0218 12:56:35.158463 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerDied","Data":"08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632"} Feb 18 12:56:35 crc kubenswrapper[4922]: I0218 12:56:35.158633 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerStarted","Data":"7c62826b0f0334c8b0aa3d37b0faacd0446320f8a887d70e52e623149d73d432"} Feb 18 12:56:36 crc kubenswrapper[4922]: I0218 12:56:36.172220 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerStarted","Data":"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9"} Feb 18 12:56:38 crc kubenswrapper[4922]: I0218 12:56:38.194937 4922 generic.go:334] "Generic (PLEG): container finished" podID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerID="b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9" exitCode=0 Feb 18 12:56:38 crc kubenswrapper[4922]: I0218 12:56:38.194976 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerDied","Data":"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9"} Feb 18 12:56:40 crc kubenswrapper[4922]: I0218 12:56:40.218754 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerStarted","Data":"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b"} Feb 18 12:56:40 crc kubenswrapper[4922]: I0218 12:56:40.237083 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qrs5h" podStartSLOduration=2.796039842 podStartE2EDuration="7.237064262s" podCreationTimestamp="2026-02-18 
12:56:33 +0000 UTC" firstStartedPulling="2026-02-18 12:56:35.161472215 +0000 UTC m=+4796.889176295" lastFinishedPulling="2026-02-18 12:56:39.602496635 +0000 UTC m=+4801.330200715" observedRunningTime="2026-02-18 12:56:40.235212265 +0000 UTC m=+4801.962916355" watchObservedRunningTime="2026-02-18 12:56:40.237064262 +0000 UTC m=+4801.964768362" Feb 18 12:56:43 crc kubenswrapper[4922]: I0218 12:56:43.631869 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:43 crc kubenswrapper[4922]: I0218 12:56:43.632389 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:43 crc kubenswrapper[4922]: I0218 12:56:43.680885 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:44 crc kubenswrapper[4922]: I0218 12:56:44.298723 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:44 crc kubenswrapper[4922]: I0218 12:56:44.351612 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:45 crc kubenswrapper[4922]: I0218 12:56:45.974124 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:56:45 crc kubenswrapper[4922]: E0218 12:56:45.974795 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.265702 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qrs5h" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="registry-server" containerID="cri-o://66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b" gracePeriod=2 Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.777520 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.955937 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ms4t\" (UniqueName: \"kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t\") pod \"94c125c3-32cd-4f33-b450-40c8cc7eacae\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.956248 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities\") pod \"94c125c3-32cd-4f33-b450-40c8cc7eacae\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.956346 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content\") pod \"94c125c3-32cd-4f33-b450-40c8cc7eacae\" (UID: \"94c125c3-32cd-4f33-b450-40c8cc7eacae\") " Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.962546 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t" (OuterVolumeSpecName: "kube-api-access-9ms4t") pod "94c125c3-32cd-4f33-b450-40c8cc7eacae" (UID: "94c125c3-32cd-4f33-b450-40c8cc7eacae"). InnerVolumeSpecName "kube-api-access-9ms4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:56:46 crc kubenswrapper[4922]: I0218 12:56:46.964511 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities" (OuterVolumeSpecName: "utilities") pod "94c125c3-32cd-4f33-b450-40c8cc7eacae" (UID: "94c125c3-32cd-4f33-b450-40c8cc7eacae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.030869 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94c125c3-32cd-4f33-b450-40c8cc7eacae" (UID: "94c125c3-32cd-4f33-b450-40c8cc7eacae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.059306 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ms4t\" (UniqueName: \"kubernetes.io/projected/94c125c3-32cd-4f33-b450-40c8cc7eacae-kube-api-access-9ms4t\") on node \"crc\" DevicePath \"\"" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.059350 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.059380 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94c125c3-32cd-4f33-b450-40c8cc7eacae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.275790 4922 generic.go:334] "Generic (PLEG): container finished" podID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerID="66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b" exitCode=0 Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.275888 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrs5h" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.275885 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerDied","Data":"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b"} Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.276380 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrs5h" event={"ID":"94c125c3-32cd-4f33-b450-40c8cc7eacae","Type":"ContainerDied","Data":"7c62826b0f0334c8b0aa3d37b0faacd0446320f8a887d70e52e623149d73d432"} Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.276401 4922 scope.go:117] "RemoveContainer" containerID="66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.310355 4922 scope.go:117] "RemoveContainer" containerID="b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.317633 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.324785 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qrs5h"] Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.347686 4922 scope.go:117] "RemoveContainer" containerID="08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.381296 4922 scope.go:117] "RemoveContainer" containerID="66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b" Feb 18 12:56:47 crc kubenswrapper[4922]: E0218 12:56:47.381779 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b\": container with ID starting with 66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b not found: ID does not exist" containerID="66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.381816 
4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b"} err="failed to get container status \"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b\": rpc error: code = NotFound desc = could not find container \"66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b\": container with ID starting with 66d77ef75776ebe5a22cef4805ed16aa4fb6c96fc71e508d6f998ab7ef682a5b not found: ID does not exist" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.381845 4922 scope.go:117] "RemoveContainer" containerID="b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9" Feb 18 12:56:47 crc kubenswrapper[4922]: E0218 12:56:47.382123 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9\": container with ID starting with b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9 not found: ID does not exist" containerID="b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.382147 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9"} err="failed to get container status \"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9\": rpc error: code = NotFound desc = could not find container \"b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9\": container with ID starting with b7f8c4fb90fcb2523ccad8afad30e2ec491363e6f807578a7eba0ee12f9fbee9 not found: ID does not exist" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.382166 4922 scope.go:117] "RemoveContainer" containerID="08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632" Feb 18 12:56:47 crc kubenswrapper[4922]: E0218 12:56:47.382491 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632\": container with ID starting with 08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632 not found: ID does not exist" containerID="08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632" Feb 18 12:56:47 crc kubenswrapper[4922]: I0218 12:56:47.382517 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632"} err="failed to get container status \"08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632\": rpc error: code = NotFound desc = could not find container \"08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632\": container with ID starting with 08a0375b93f0bbc164f903238995d36229517460a9b5e99b1dcfb85d4f3d5632 not found: ID does not exist" Feb 18 12:56:48 crc kubenswrapper[4922]: I0218 12:56:48.985877 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" path="/var/lib/kubelet/pods/94c125c3-32cd-4f33-b450-40c8cc7eacae/volumes" Feb 18 12:56:56 crc kubenswrapper[4922]: I0218 12:56:56.974729 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:56:56 crc kubenswrapper[4922]: E0218 12:56:56.975721 4922 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:57:10 crc kubenswrapper[4922]: I0218 12:57:10.974294 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:57:10 crc kubenswrapper[4922]: E0218 12:57:10.974982 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:57:22 crc kubenswrapper[4922]: I0218 12:57:22.975077 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:57:22 crc kubenswrapper[4922]: E0218 12:57:22.976092 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:57:34 crc kubenswrapper[4922]: I0218 12:57:34.973184 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:57:34 crc kubenswrapper[4922]: E0218 12:57:34.973978 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 12:57:45 crc kubenswrapper[4922]: I0218 12:57:45.973348 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 12:57:46 crc kubenswrapper[4922]: I0218 12:57:46.811512 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f"} Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.815766 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:57:51 crc kubenswrapper[4922]: E0218 12:57:51.816688 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="extract-content" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.816704 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="extract-content" Feb 18 12:57:51 crc kubenswrapper[4922]: E0218 12:57:51.816779 4922 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="extract-utilities" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.816788 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="extract-utilities" Feb 18 12:57:51 crc kubenswrapper[4922]: E0218 12:57:51.816801 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="registry-server" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.816807 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="registry-server" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.817003 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c125c3-32cd-4f33-b450-40c8cc7eacae" containerName="registry-server" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.818572 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.833207 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.917762 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.917904 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:51 crc kubenswrapper[4922]: I0218 12:57:51.917977 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f557n\" (UniqueName: \"kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.019606 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.019682 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f557n\" (UniqueName: \"kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.019821 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.020338 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.020351 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.039509 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f557n\" (UniqueName: \"kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n\") pod \"redhat-marketplace-hvxm5\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.147791 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.594243 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.874623 4922 generic.go:334] "Generic (PLEG): container finished" podID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerID="882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3" exitCode=0 Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.874693 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerDied","Data":"882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3"} Feb 18 12:57:52 crc kubenswrapper[4922]: I0218 12:57:52.874992 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerStarted","Data":"0d0fed85c22369371f11a697ae35c11710d53cc735f858229a22ee2477d97913"} Feb 18 12:57:53 crc kubenswrapper[4922]: I0218 12:57:53.885097 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerStarted","Data":"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7"} Feb 18 12:57:54 crc kubenswrapper[4922]: I0218 12:57:54.896109 4922 generic.go:334] "Generic (PLEG): container finished" podID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerID="9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7" exitCode=0 Feb 18 12:57:54 crc kubenswrapper[4922]: I0218 12:57:54.896174 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerDied","Data":"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7"} Feb 18 
12:57:55 crc kubenswrapper[4922]: I0218 12:57:55.907277 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerStarted","Data":"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368"} Feb 18 12:57:55 crc kubenswrapper[4922]: I0218 12:57:55.935761 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvxm5" podStartSLOduration=2.551131387 podStartE2EDuration="4.93574119s" podCreationTimestamp="2026-02-18 12:57:51 +0000 UTC" firstStartedPulling="2026-02-18 12:57:52.875959142 +0000 UTC m=+4874.603663222" lastFinishedPulling="2026-02-18 12:57:55.260568945 +0000 UTC m=+4876.988273025" observedRunningTime="2026-02-18 12:57:55.926621129 +0000 UTC m=+4877.654325209" watchObservedRunningTime="2026-02-18 12:57:55.93574119 +0000 UTC m=+4877.663445270" Feb 18 12:58:02 crc kubenswrapper[4922]: I0218 12:58:02.149007 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:02 crc kubenswrapper[4922]: I0218 12:58:02.150115 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:02 crc kubenswrapper[4922]: I0218 12:58:02.199181 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:03 crc kubenswrapper[4922]: I0218 12:58:03.036155 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:04 crc kubenswrapper[4922]: I0218 12:58:04.206112 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.006149 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvxm5" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="registry-server" containerID="cri-o://737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368" gracePeriod=2 Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.502912 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.596135 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities\") pod \"3c3f144e-fabf-4034-975e-46f1494ee4bf\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.596253 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f557n\" (UniqueName: \"kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n\") pod \"3c3f144e-fabf-4034-975e-46f1494ee4bf\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.596379 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content\") pod \"3c3f144e-fabf-4034-975e-46f1494ee4bf\" (UID: \"3c3f144e-fabf-4034-975e-46f1494ee4bf\") " Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.597516 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities" (OuterVolumeSpecName: "utilities") pod "3c3f144e-fabf-4034-975e-46f1494ee4bf" (UID: "3c3f144e-fabf-4034-975e-46f1494ee4bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.605491 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n" (OuterVolumeSpecName: "kube-api-access-f557n") pod "3c3f144e-fabf-4034-975e-46f1494ee4bf" (UID: "3c3f144e-fabf-4034-975e-46f1494ee4bf"). InnerVolumeSpecName "kube-api-access-f557n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.621263 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c3f144e-fabf-4034-975e-46f1494ee4bf" (UID: "3c3f144e-fabf-4034-975e-46f1494ee4bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.699064 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.699106 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f557n\" (UniqueName: \"kubernetes.io/projected/3c3f144e-fabf-4034-975e-46f1494ee4bf-kube-api-access-f557n\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:05 crc kubenswrapper[4922]: I0218 12:58:05.699114 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3f144e-fabf-4034-975e-46f1494ee4bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.019986 4922 generic.go:334] "Generic (PLEG): container finished" podID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerID="737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368" exitCode=0 Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.020056 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerDied","Data":"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368"} Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.020085 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxm5" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.020102 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxm5" event={"ID":"3c3f144e-fabf-4034-975e-46f1494ee4bf","Type":"ContainerDied","Data":"0d0fed85c22369371f11a697ae35c11710d53cc735f858229a22ee2477d97913"} Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.020121 4922 scope.go:117] "RemoveContainer" containerID="737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.061421 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.062487 4922 scope.go:117] "RemoveContainer" containerID="9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.072906 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxm5"] Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.372625 4922 scope.go:117] "RemoveContainer" containerID="882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.429353 4922 scope.go:117] "RemoveContainer" containerID="737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368" Feb 18 12:58:06 crc kubenswrapper[4922]: E0218 12:58:06.429841 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368\": container with ID starting with 737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368 not found: ID does not exist" containerID="737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.429959 4922 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368"} err="failed to get container status \"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368\": rpc error: code = NotFound desc = could not find container \"737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368\": container with ID starting with 737a38605f39c3e1061fe189a1db07f3846ad31c74ef2249f1a43f7e2f414368 not found: ID does not exist" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.430071 4922 scope.go:117] "RemoveContainer" containerID="9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7" Feb 18 12:58:06 crc kubenswrapper[4922]: E0218 12:58:06.430833 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7\": container with ID starting with 9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7 not found: ID does not exist" containerID="9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.430924 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7"} err="failed to get container status \"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7\": rpc error: code = NotFound desc = could not find container \"9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7\": container with ID starting with 9865bb8d3add17f03ed0cb12b7bff7d34d2081926faa532b4686b36cd82edfb7 not found: ID does not exist" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.430999 4922 scope.go:117] "RemoveContainer" containerID="882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3" Feb 18 12:58:06 crc kubenswrapper[4922]: E0218 12:58:06.431402 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3\": container with ID starting with 882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3 not found: ID does not exist" containerID="882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.431435 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3"} err="failed to get container status \"882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3\": rpc error: code = NotFound desc = could not find container \"882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3\": container with ID starting with 882d0494ca3cd1b991318e5181f3fda34cfb362f6738b6b9c57fca6cd37790d3 not found: ID does not exist" Feb 18 12:58:06 crc kubenswrapper[4922]: I0218 12:58:06.990571 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" path="/var/lib/kubelet/pods/3c3f144e-fabf-4034-975e-46f1494ee4bf/volumes" Feb 18 12:58:19 crc kubenswrapper[4922]: I0218 12:58:19.137215 4922 generic.go:334] "Generic (PLEG): container finished" podID="4525818f-9e1d-48a0-8ec1-1a22a0841dd4" containerID="e3b2b8b928d4d252bf46e4bb853a742c089adf478f27339f292b3bd6347dcdc0" exitCode=1 Feb 18 12:58:19 crc kubenswrapper[4922]: I0218 
12:58:19.137324 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4525818f-9e1d-48a0-8ec1-1a22a0841dd4","Type":"ContainerDied","Data":"e3b2b8b928d4d252bf46e4bb853a742c089adf478f27339f292b3bd6347dcdc0"} Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.485656 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.583744 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.583830 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.583862 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.583898 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.583978 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.584051 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.584424 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.584693 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data" (OuterVolumeSpecName: "config-data") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.584781 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.584982 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqctm\" (UniqueName: \"kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.585048 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\" (UID: \"4525818f-9e1d-48a0-8ec1-1a22a0841dd4\") " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.585749 4922 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.585768 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.589711 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.590188 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm" (OuterVolumeSpecName: "kube-api-access-vqctm") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "kube-api-access-vqctm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.611320 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.612332 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.613979 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.641344 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.662425 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4525818f-9e1d-48a0-8ec1-1a22a0841dd4" (UID: "4525818f-9e1d-48a0-8ec1-1a22a0841dd4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687078 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687110 4922 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687120 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqctm\" (UniqueName: \"kubernetes.io/projected/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-kube-api-access-vqctm\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687151 4922 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687162 4922 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687172 4922 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.687180 4922 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4525818f-9e1d-48a0-8ec1-1a22a0841dd4-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.708550 4922 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 18 12:58:20 crc kubenswrapper[4922]: I0218 12:58:20.789230 4922 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:58:21 crc kubenswrapper[4922]: I0218 12:58:21.157209 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4525818f-9e1d-48a0-8ec1-1a22a0841dd4","Type":"ContainerDied","Data":"492694f74401bc119697f1caa4fa178df1922c217659e262bc75d36660dd58d8"} Feb 18 12:58:21 crc kubenswrapper[4922]: I0218 12:58:21.157262 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492694f74401bc119697f1caa4fa178df1922c217659e262bc75d36660dd58d8" Feb 18 12:58:21 crc kubenswrapper[4922]: I0218 12:58:21.157278 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.224905 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:58:28 crc kubenswrapper[4922]: E0218 12:58:28.226590 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="extract-utilities" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.226611 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="extract-utilities" Feb 18 12:58:28 crc kubenswrapper[4922]: E0218 12:58:28.226631 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="extract-content" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.226639 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="extract-content" Feb 18 12:58:28 crc kubenswrapper[4922]: E0218 12:58:28.226655 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4525818f-9e1d-48a0-8ec1-1a22a0841dd4" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.226662 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="4525818f-9e1d-48a0-8ec1-1a22a0841dd4" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:58:28 crc kubenswrapper[4922]: E0218 12:58:28.226688 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="registry-server" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.226694 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="registry-server" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.226964 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="4525818f-9e1d-48a0-8ec1-1a22a0841dd4" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.227016 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3f144e-fabf-4034-975e-46f1494ee4bf" containerName="registry-server" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.228049 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.231073 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-thfnr" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.244397 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.340728 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkds\" (UniqueName: \"kubernetes.io/projected/fc2833dd-ab51-414c-9ce3-ed8078989ea5-kube-api-access-stkds\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.341042 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.443401 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkds\" (UniqueName: \"kubernetes.io/projected/fc2833dd-ab51-414c-9ce3-ed8078989ea5-kube-api-access-stkds\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.443557 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.444028 4922 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.465259 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkds\" (UniqueName: \"kubernetes.io/projected/fc2833dd-ab51-414c-9ce3-ed8078989ea5-kube-api-access-stkds\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc kubenswrapper[4922]: I0218 12:58:28.471042 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fc2833dd-ab51-414c-9ce3-ed8078989ea5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:28 crc 
kubenswrapper[4922]: I0218 12:58:28.551993 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:58:29 crc kubenswrapper[4922]: I0218 12:58:29.033052 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:58:29 crc kubenswrapper[4922]: I0218 12:58:29.220803 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fc2833dd-ab51-414c-9ce3-ed8078989ea5","Type":"ContainerStarted","Data":"5d8033e919b26ce15b779e896791d985ec6fd7ba74dfed5e364ccbafba85687d"} Feb 18 12:58:30 crc kubenswrapper[4922]: I0218 12:58:30.229836 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fc2833dd-ab51-414c-9ce3-ed8078989ea5","Type":"ContainerStarted","Data":"58ee746ddfe99d427062a01a14a31e40a16f287ebdc8eab91fc41fab8d27f975"} Feb 18 12:58:30 crc kubenswrapper[4922]: I0218 12:58:30.244774 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.333675906 podStartE2EDuration="2.244756278s" podCreationTimestamp="2026-02-18 12:58:28 +0000 UTC" firstStartedPulling="2026-02-18 12:58:29.024440872 +0000 UTC m=+4910.752144952" lastFinishedPulling="2026-02-18 12:58:29.935521244 +0000 UTC m=+4911.663225324" observedRunningTime="2026-02-18 12:58:30.240886539 +0000 UTC m=+4911.968590619" watchObservedRunningTime="2026-02-18 12:58:30.244756278 +0000 UTC m=+4911.972460358" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.591783 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.595535 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.603265 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.727057 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfnt\" (UniqueName: \"kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.727185 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.727211 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.829415 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttfnt\" (UniqueName: \"kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.829580 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.829619 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.830169 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.830479 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.849749 4922 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ttfnt\" (UniqueName: \"kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt\") pod \"community-operators-nxktd\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:53 crc kubenswrapper[4922]: I0218 12:58:53.924643 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:58:54 crc kubenswrapper[4922]: W0218 12:58:54.426446 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37694696_8e79_4f78_be48_8d4bbdeef478.slice/crio-60d43046c25b17aef620e372e4fff97acd4e4d4aef2c1b4922e6e429cb743ec5 WatchSource:0}: Error finding container 60d43046c25b17aef620e372e4fff97acd4e4d4aef2c1b4922e6e429cb743ec5: Status 404 returned error can't find the container with id 60d43046c25b17aef620e372e4fff97acd4e4d4aef2c1b4922e6e429cb743ec5 Feb 18 12:58:54 crc kubenswrapper[4922]: I0218 12:58:54.427903 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:58:54 crc kubenswrapper[4922]: I0218 12:58:54.438081 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerStarted","Data":"60d43046c25b17aef620e372e4fff97acd4e4d4aef2c1b4922e6e429cb743ec5"} Feb 18 12:58:55 crc kubenswrapper[4922]: I0218 12:58:55.447961 4922 generic.go:334] "Generic (PLEG): container finished" podID="37694696-8e79-4f78-be48-8d4bbdeef478" containerID="0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d" exitCode=0 Feb 18 12:58:55 crc kubenswrapper[4922]: I0218 12:58:55.448285 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerDied","Data":"0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d"} Feb 18 12:58:55 crc kubenswrapper[4922]: I0218 12:58:55.451008 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:58:56 crc kubenswrapper[4922]: I0218 12:58:56.458860 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerStarted","Data":"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349"} Feb 18 12:58:57 crc kubenswrapper[4922]: I0218 12:58:57.469727 4922 generic.go:334] "Generic (PLEG): container finished" podID="37694696-8e79-4f78-be48-8d4bbdeef478" containerID="8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349" exitCode=0 Feb 18 12:58:57 crc kubenswrapper[4922]: I0218 12:58:57.469818 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerDied","Data":"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349"} Feb 18 12:58:58 crc kubenswrapper[4922]: I0218 12:58:58.481643 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerStarted","Data":"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9"} Feb 18 12:58:58 crc kubenswrapper[4922]: I0218 
12:58:58.501819 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nxktd" podStartSLOduration=3.098737372 podStartE2EDuration="5.501800002s" podCreationTimestamp="2026-02-18 12:58:53 +0000 UTC" firstStartedPulling="2026-02-18 12:58:55.450797196 +0000 UTC m=+4937.178501276" lastFinishedPulling="2026-02-18 12:58:57.853859826 +0000 UTC m=+4939.581563906" observedRunningTime="2026-02-18 12:58:58.495959114 +0000 UTC m=+4940.223663194" watchObservedRunningTime="2026-02-18 12:58:58.501800002 +0000 UTC m=+4940.229504082" Feb 18 12:59:03 crc kubenswrapper[4922]: I0218 12:59:03.925340 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:03 crc kubenswrapper[4922]: I0218 12:59:03.925930 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:03 crc kubenswrapper[4922]: I0218 12:59:03.971347 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:04 crc kubenswrapper[4922]: I0218 12:59:04.575165 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:04 crc kubenswrapper[4922]: I0218 12:59:04.621621 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:59:06 crc kubenswrapper[4922]: I0218 12:59:06.546641 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nxktd" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="registry-server" containerID="cri-o://c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9" gracePeriod=2 Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.001239 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bf65p/must-gather-pnxz8"] Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.003534 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bf65p/must-gather-pnxz8"] Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.003622 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.015805 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bf65p"/"kube-root-ca.crt" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.016051 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bf65p"/"openshift-service-ca.crt" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.018332 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bf65p"/"default-dockercfg-xvq9t" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.119208 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.124808 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.124877 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkczn\" (UniqueName: \"kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.226700 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttfnt\" (UniqueName: \"kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt\") pod \"37694696-8e79-4f78-be48-8d4bbdeef478\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.226961 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content\") pod \"37694696-8e79-4f78-be48-8d4bbdeef478\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.227023 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities\") pod \"37694696-8e79-4f78-be48-8d4bbdeef478\" (UID: \"37694696-8e79-4f78-be48-8d4bbdeef478\") " Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.227458 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.227518 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkczn\" (UniqueName: \"kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.228045 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities" (OuterVolumeSpecName: "utilities") pod "37694696-8e79-4f78-be48-8d4bbdeef478" (UID: "37694696-8e79-4f78-be48-8d4bbdeef478"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.228353 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.234516 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt" (OuterVolumeSpecName: "kube-api-access-ttfnt") pod "37694696-8e79-4f78-be48-8d4bbdeef478" (UID: "37694696-8e79-4f78-be48-8d4bbdeef478"). InnerVolumeSpecName "kube-api-access-ttfnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.244186 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkczn\" (UniqueName: \"kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn\") pod \"must-gather-pnxz8\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.284946 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37694696-8e79-4f78-be48-8d4bbdeef478" (UID: "37694696-8e79-4f78-be48-8d4bbdeef478"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.329972 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.330005 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttfnt\" (UniqueName: \"kubernetes.io/projected/37694696-8e79-4f78-be48-8d4bbdeef478-kube-api-access-ttfnt\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.330016 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37694696-8e79-4f78-be48-8d4bbdeef478-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.414866 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.558333 4922 generic.go:334] "Generic (PLEG): container finished" podID="37694696-8e79-4f78-be48-8d4bbdeef478" containerID="c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9" exitCode=0 Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.558408 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerDied","Data":"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9"} Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.558438 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxktd" event={"ID":"37694696-8e79-4f78-be48-8d4bbdeef478","Type":"ContainerDied","Data":"60d43046c25b17aef620e372e4fff97acd4e4d4aef2c1b4922e6e429cb743ec5"} Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.558472 4922 scope.go:117] "RemoveContainer" containerID="c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.558648 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxktd" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.597966 4922 scope.go:117] "RemoveContainer" containerID="8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.609751 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.618006 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nxktd"] Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.632014 4922 scope.go:117] "RemoveContainer" containerID="0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.667781 4922 scope.go:117] "RemoveContainer" containerID="c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9" Feb 18 12:59:07 crc kubenswrapper[4922]: E0218 12:59:07.668539 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9\": container with ID starting with c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9 not found: ID does not exist" containerID="c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.668567 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9"} err="failed to get container status \"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9\": rpc error: code = NotFound desc = could not find container \"c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9\": container with ID starting with c5daea675634727bd83c74c0073578c4223e10a3aaa230ae869aeb8c315362d9 not found: ID does not exist" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.668589 4922 scope.go:117] "RemoveContainer" containerID="8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349" Feb 18 12:59:07 crc kubenswrapper[4922]: E0218 12:59:07.669110 4922 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349\": container with ID starting with 8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349 not found: ID does not exist" containerID="8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.669134 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349"} err="failed to get container status \"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349\": rpc error: code = NotFound desc = could not find container \"8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349\": container with ID starting with 8efcea36a093ebc966af2378eeaab555a57033dc35aa8325ecff1d49f8467349 not found: ID does not exist" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.669206 4922 scope.go:117] "RemoveContainer" containerID="0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d" Feb 18 12:59:07 crc kubenswrapper[4922]: E0218 12:59:07.669602 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d\": container with ID starting with 0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d not found: ID does not exist" containerID="0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.669620 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d"} err="failed to get container status \"0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d\": rpc error: code = NotFound desc = could not find container \"0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d\": container with ID starting with 0c8448c56970a5721edade6dfb6a144edc5f360fa40fa69b6e85fe752f611c9d not found: ID does not exist" Feb 18 12:59:07 crc kubenswrapper[4922]: I0218 12:59:07.874761 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bf65p/must-gather-pnxz8"] Feb 18 12:59:08 crc kubenswrapper[4922]: I0218 12:59:08.567676 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/must-gather-pnxz8" event={"ID":"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44","Type":"ContainerStarted","Data":"44ad0ca2b1f4d58d5e9c1bdc531a19d9bcb3626c9f2ad8ca2e2348f7eea332c7"} Feb 18 12:59:08 crc kubenswrapper[4922]: I0218 12:59:08.985453 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" path="/var/lib/kubelet/pods/37694696-8e79-4f78-be48-8d4bbdeef478/volumes" Feb 18 12:59:16 crc kubenswrapper[4922]: I0218 12:59:16.639950 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/must-gather-pnxz8" event={"ID":"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44","Type":"ContainerStarted","Data":"87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0"} Feb 18 12:59:16 crc kubenswrapper[4922]: I0218 12:59:16.640532 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/must-gather-pnxz8" 
event={"ID":"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44","Type":"ContainerStarted","Data":"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c"} Feb 18 12:59:16 crc kubenswrapper[4922]: I0218 12:59:16.656910 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bf65p/must-gather-pnxz8" podStartSLOduration=2.999636208 podStartE2EDuration="10.656887539s" podCreationTimestamp="2026-02-18 12:59:06 +0000 UTC" firstStartedPulling="2026-02-18 12:59:08.173455247 +0000 UTC m=+4949.901159327" lastFinishedPulling="2026-02-18 12:59:15.830706578 +0000 UTC m=+4957.558410658" observedRunningTime="2026-02-18 12:59:16.653884173 +0000 UTC m=+4958.381588263" watchObservedRunningTime="2026-02-18 12:59:16.656887539 +0000 UTC m=+4958.384591619" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.781963 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bf65p/crc-debug-zdn7g"] Feb 18 12:59:19 crc kubenswrapper[4922]: E0218 12:59:19.783316 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="registry-server" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.783332 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="registry-server" Feb 18 12:59:19 crc kubenswrapper[4922]: E0218 12:59:19.783353 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="extract-utilities" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.783375 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="extract-utilities" Feb 18 12:59:19 crc kubenswrapper[4922]: E0218 12:59:19.783391 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="extract-content" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.783397 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="extract-content" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.783583 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="37694696-8e79-4f78-be48-8d4bbdeef478" containerName="registry-server" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.784253 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.879850 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.879906 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcrn\" (UniqueName: \"kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.982602 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.982696 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zcrn\" (UniqueName: \"kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:19 crc kubenswrapper[4922]: I0218 12:59:19.982816 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:20 crc kubenswrapper[4922]: I0218 12:59:20.006285 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zcrn\" (UniqueName: \"kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn\") pod \"crc-debug-zdn7g\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:20 crc kubenswrapper[4922]: I0218 12:59:20.101735 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 12:59:20 crc kubenswrapper[4922]: W0218 12:59:20.145619 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ee5a976_94bf_472d_af8f_09df6835587b.slice/crio-5aa8a6bf58c8c23a03dba7b3217875b34ffb49c2470c51d0c70e37165b156f73 WatchSource:0}: Error finding container 5aa8a6bf58c8c23a03dba7b3217875b34ffb49c2470c51d0c70e37165b156f73: Status 404 returned error can't find the container with id 5aa8a6bf58c8c23a03dba7b3217875b34ffb49c2470c51d0c70e37165b156f73 Feb 18 12:59:20 crc kubenswrapper[4922]: I0218 12:59:20.684493 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" event={"ID":"7ee5a976-94bf-472d-af8f-09df6835587b","Type":"ContainerStarted","Data":"5aa8a6bf58c8c23a03dba7b3217875b34ffb49c2470c51d0c70e37165b156f73"} Feb 18 12:59:31 crc kubenswrapper[4922]: I0218 12:59:31.799545 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" event={"ID":"7ee5a976-94bf-472d-af8f-09df6835587b","Type":"ContainerStarted","Data":"807a994ec59a7105a4222c75e8c367b51fc34fa12867e26594ab79a75f9ffd06"} Feb 18 12:59:31 crc kubenswrapper[4922]: I0218 12:59:31.823630 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" podStartSLOduration=2.104918267 podStartE2EDuration="12.823610368s" podCreationTimestamp="2026-02-18 12:59:19 +0000 UTC" firstStartedPulling="2026-02-18 12:59:20.148395085 +0000 UTC m=+4961.876099165" lastFinishedPulling="2026-02-18 12:59:30.867087176 +0000 UTC m=+4972.594791266" observedRunningTime="2026-02-18 12:59:31.81459248 +0000 UTC m=+4973.542296560" watchObservedRunningTime="2026-02-18 12:59:31.823610368 +0000 UTC m=+4973.551314448" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.183439 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs"] Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.185757 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.189720 4922 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.191307 4922 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.210831 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs"] Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.322946 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.323260 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.323572 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8926d\" (UniqueName: \"kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.426269 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8926d\" (UniqueName: \"kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.426751 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.426927 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.427772 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume\") pod 
\"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.434320 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.469411 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8926d\" (UniqueName: \"kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d\") pod \"collect-profiles-29523660-khccs\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:00 crc kubenswrapper[4922]: I0218 13:00:00.512617 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:01 crc kubenswrapper[4922]: I0218 13:00:01.061103 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs"] Feb 18 13:00:01 crc kubenswrapper[4922]: W0218 13:00:01.064613 4922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dfb0ade_7d72_46ba_b438_04dd2465b963.slice/crio-030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265 WatchSource:0}: Error finding container 030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265: Status 404 returned error can't find the container with id 030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265 Feb 18 13:00:01 crc kubenswrapper[4922]: I0218 13:00:01.151373 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" event={"ID":"9dfb0ade-7d72-46ba-b438-04dd2465b963","Type":"ContainerStarted","Data":"030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265"} Feb 18 13:00:02 crc kubenswrapper[4922]: I0218 13:00:02.162635 4922 generic.go:334] "Generic (PLEG): container finished" podID="9dfb0ade-7d72-46ba-b438-04dd2465b963" containerID="4a6a3619ea4131a004e539864c38f6aff500cbd29e608f03198bcf5f40b0a6e5" exitCode=0 Feb 18 13:00:02 crc kubenswrapper[4922]: I0218 13:00:02.162943 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" event={"ID":"9dfb0ade-7d72-46ba-b438-04dd2465b963","Type":"ContainerDied","Data":"4a6a3619ea4131a004e539864c38f6aff500cbd29e608f03198bcf5f40b0a6e5"} Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.109937 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.194868 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" event={"ID":"9dfb0ade-7d72-46ba-b438-04dd2465b963","Type":"ContainerDied","Data":"030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265"} Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.195200 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030383aab789df0d99eca0e575230a460d70ccefa58d04b4d79bd4d9df818265" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.195079 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-khccs" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.274423 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume\") pod \"9dfb0ade-7d72-46ba-b438-04dd2465b963\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.274483 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8926d\" (UniqueName: \"kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d\") pod \"9dfb0ade-7d72-46ba-b438-04dd2465b963\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.274622 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume\") pod \"9dfb0ade-7d72-46ba-b438-04dd2465b963\" (UID: \"9dfb0ade-7d72-46ba-b438-04dd2465b963\") " Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.277356 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume" (OuterVolumeSpecName: "config-volume") pod "9dfb0ade-7d72-46ba-b438-04dd2465b963" (UID: "9dfb0ade-7d72-46ba-b438-04dd2465b963"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.293483 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9dfb0ade-7d72-46ba-b438-04dd2465b963" (UID: "9dfb0ade-7d72-46ba-b438-04dd2465b963"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.295233 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d" (OuterVolumeSpecName: "kube-api-access-8926d") pod "9dfb0ade-7d72-46ba-b438-04dd2465b963" (UID: "9dfb0ade-7d72-46ba-b438-04dd2465b963"). InnerVolumeSpecName "kube-api-access-8926d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.378012 4922 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9dfb0ade-7d72-46ba-b438-04dd2465b963-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.378068 4922 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9dfb0ade-7d72-46ba-b438-04dd2465b963-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:04 crc kubenswrapper[4922]: I0218 13:00:04.378114 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8926d\" (UniqueName: \"kubernetes.io/projected/9dfb0ade-7d72-46ba-b438-04dd2465b963-kube-api-access-8926d\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:05 crc kubenswrapper[4922]: I0218 13:00:05.193046 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4"] Feb 18 13:00:05 crc kubenswrapper[4922]: I0218 13:00:05.212033 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-grpz4"] Feb 18 13:00:06 crc kubenswrapper[4922]: I0218 13:00:06.992000 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fefb3a87-d203-4ac1-b63d-61c582015132" path="/var/lib/kubelet/pods/fefb3a87-d203-4ac1-b63d-61c582015132/volumes" Feb 18 13:00:09 crc kubenswrapper[4922]: I0218 13:00:09.807534 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:00:09 crc kubenswrapper[4922]: I0218 13:00:09.807891 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:00:15 crc kubenswrapper[4922]: I0218 13:00:15.751789 4922 scope.go:117] "RemoveContainer" containerID="5b401b8ee4f7943af0a7b7807634c73c9cc5371f7bb8ea18f378db7de3390a99" Feb 18 13:00:27 crc kubenswrapper[4922]: I0218 13:00:27.391199 4922 generic.go:334] "Generic (PLEG): container finished" podID="7ee5a976-94bf-472d-af8f-09df6835587b" containerID="807a994ec59a7105a4222c75e8c367b51fc34fa12867e26594ab79a75f9ffd06" exitCode=0 Feb 18 13:00:27 crc kubenswrapper[4922]: I0218 13:00:27.391314 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" event={"ID":"7ee5a976-94bf-472d-af8f-09df6835587b","Type":"ContainerDied","Data":"807a994ec59a7105a4222c75e8c367b51fc34fa12867e26594ab79a75f9ffd06"} Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.500694 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.541132 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-zdn7g"] Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.551065 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-zdn7g"] Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.640016 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host\") pod \"7ee5a976-94bf-472d-af8f-09df6835587b\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.640174 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zcrn\" (UniqueName: \"kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn\") pod \"7ee5a976-94bf-472d-af8f-09df6835587b\" (UID: \"7ee5a976-94bf-472d-af8f-09df6835587b\") " Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.640351 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host" (OuterVolumeSpecName: "host") pod "7ee5a976-94bf-472d-af8f-09df6835587b" (UID: "7ee5a976-94bf-472d-af8f-09df6835587b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.640698 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7ee5a976-94bf-472d-af8f-09df6835587b-host\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.655229 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn" (OuterVolumeSpecName: "kube-api-access-4zcrn") pod "7ee5a976-94bf-472d-af8f-09df6835587b" (UID: "7ee5a976-94bf-472d-af8f-09df6835587b"). InnerVolumeSpecName "kube-api-access-4zcrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.742600 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zcrn\" (UniqueName: \"kubernetes.io/projected/7ee5a976-94bf-472d-af8f-09df6835587b-kube-api-access-4zcrn\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:28 crc kubenswrapper[4922]: I0218 13:00:28.996328 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee5a976-94bf-472d-af8f-09df6835587b" path="/var/lib/kubelet/pods/7ee5a976-94bf-472d-af8f-09df6835587b/volumes" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.408830 4922 scope.go:117] "RemoveContainer" containerID="807a994ec59a7105a4222c75e8c367b51fc34fa12867e26594ab79a75f9ffd06" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.408879 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-zdn7g" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.727521 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bf65p/crc-debug-t2cmf"] Feb 18 13:00:29 crc kubenswrapper[4922]: E0218 13:00:29.727974 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfb0ade-7d72-46ba-b438-04dd2465b963" containerName="collect-profiles" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.727990 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfb0ade-7d72-46ba-b438-04dd2465b963" containerName="collect-profiles" Feb 18 13:00:29 crc kubenswrapper[4922]: E0218 13:00:29.728034 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5a976-94bf-472d-af8f-09df6835587b" containerName="container-00" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.728042 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5a976-94bf-472d-af8f-09df6835587b" containerName="container-00" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.728259 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfb0ade-7d72-46ba-b438-04dd2465b963" containerName="collect-profiles" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.728282 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee5a976-94bf-472d-af8f-09df6835587b" containerName="container-00" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.730094 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.864427 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.864652 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zkpx\" (UniqueName: \"kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.966560 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.966705 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zkpx\" (UniqueName: \"kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc kubenswrapper[4922]: I0218 13:00:29.967177 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:29 crc 
kubenswrapper[4922]: I0218 13:00:29.989137 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zkpx\" (UniqueName: \"kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx\") pod \"crc-debug-t2cmf\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:30 crc kubenswrapper[4922]: I0218 13:00:30.047009 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:30 crc kubenswrapper[4922]: I0218 13:00:30.417689 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" event={"ID":"77a43352-36b4-4006-b6ca-489c923eaf63","Type":"ContainerStarted","Data":"dabdb87ca65d6f8f0ea66c048ef4f9fa15f0c9dea5a0b549634e5eeaf236ae25"} Feb 18 13:00:30 crc kubenswrapper[4922]: I0218 13:00:30.418200 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" event={"ID":"77a43352-36b4-4006-b6ca-489c923eaf63","Type":"ContainerStarted","Data":"29f3c28826d06281ee090cf35680da696091c0a166a7523f0f211375d3bfa443"} Feb 18 13:00:30 crc kubenswrapper[4922]: I0218 13:00:30.437250 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" podStartSLOduration=1.4372315979999999 podStartE2EDuration="1.437231598s" podCreationTimestamp="2026-02-18 13:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 13:00:30.430863856 +0000 UTC m=+5032.158567936" watchObservedRunningTime="2026-02-18 13:00:30.437231598 +0000 UTC m=+5032.164935678" Feb 18 13:00:31 crc kubenswrapper[4922]: I0218 13:00:31.429798 4922 generic.go:334] "Generic (PLEG): container finished" podID="77a43352-36b4-4006-b6ca-489c923eaf63" containerID="dabdb87ca65d6f8f0ea66c048ef4f9fa15f0c9dea5a0b549634e5eeaf236ae25" exitCode=0 Feb 18 13:00:31 crc kubenswrapper[4922]: I0218 13:00:31.430100 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" event={"ID":"77a43352-36b4-4006-b6ca-489c923eaf63","Type":"ContainerDied","Data":"dabdb87ca65d6f8f0ea66c048ef4f9fa15f0c9dea5a0b549634e5eeaf236ae25"} Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.711883 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.817954 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zkpx\" (UniqueName: \"kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx\") pod \"77a43352-36b4-4006-b6ca-489c923eaf63\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.818335 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host\") pod \"77a43352-36b4-4006-b6ca-489c923eaf63\" (UID: \"77a43352-36b4-4006-b6ca-489c923eaf63\") " Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.818410 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host" (OuterVolumeSpecName: "host") pod "77a43352-36b4-4006-b6ca-489c923eaf63" (UID: "77a43352-36b4-4006-b6ca-489c923eaf63"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.818860 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77a43352-36b4-4006-b6ca-489c923eaf63-host\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.824150 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx" (OuterVolumeSpecName: "kube-api-access-2zkpx") pod "77a43352-36b4-4006-b6ca-489c923eaf63" (UID: "77a43352-36b4-4006-b6ca-489c923eaf63"). InnerVolumeSpecName "kube-api-access-2zkpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.879026 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-t2cmf"] Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.888928 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-t2cmf"] Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.920953 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zkpx\" (UniqueName: \"kubernetes.io/projected/77a43352-36b4-4006-b6ca-489c923eaf63-kube-api-access-2zkpx\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:32 crc kubenswrapper[4922]: I0218 13:00:32.983539 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a43352-36b4-4006-b6ca-489c923eaf63" path="/var/lib/kubelet/pods/77a43352-36b4-4006-b6ca-489c923eaf63/volumes" Feb 18 13:00:33 crc kubenswrapper[4922]: I0218 13:00:33.450525 4922 scope.go:117] "RemoveContainer" containerID="dabdb87ca65d6f8f0ea66c048ef4f9fa15f0c9dea5a0b549634e5eeaf236ae25" Feb 18 13:00:33 crc kubenswrapper[4922]: I0218 13:00:33.450662 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-t2cmf" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.039051 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bf65p/crc-debug-swr2w"] Feb 18 13:00:34 crc kubenswrapper[4922]: E0218 13:00:34.039953 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a43352-36b4-4006-b6ca-489c923eaf63" containerName="container-00" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.039971 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a43352-36b4-4006-b6ca-489c923eaf63" containerName="container-00" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.040196 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a43352-36b4-4006-b6ca-489c923eaf63" containerName="container-00" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.040901 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.143917 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th4g8\" (UniqueName: \"kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.143980 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.245849 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th4g8\" (UniqueName: \"kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.245907 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.246110 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.262918 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th4g8\" (UniqueName: \"kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8\") pod \"crc-debug-swr2w\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.358968 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:34 crc kubenswrapper[4922]: I0218 13:00:34.460554 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-swr2w" event={"ID":"186cacb8-5e9c-4617-80ae-a6e968fa421b","Type":"ContainerStarted","Data":"326ba37b0375cb7b50735b154f30002943935d23cb145c6d19e89d30c20c8fa0"} Feb 18 13:00:35 crc kubenswrapper[4922]: I0218 13:00:35.477280 4922 generic.go:334] "Generic (PLEG): container finished" podID="186cacb8-5e9c-4617-80ae-a6e968fa421b" containerID="6115389c653a52211bb6d60a2ee801c77bf458f01c30b57fb3cbccdfadf9cac5" exitCode=0 Feb 18 13:00:35 crc kubenswrapper[4922]: I0218 13:00:35.477803 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/crc-debug-swr2w" event={"ID":"186cacb8-5e9c-4617-80ae-a6e968fa421b","Type":"ContainerDied","Data":"6115389c653a52211bb6d60a2ee801c77bf458f01c30b57fb3cbccdfadf9cac5"} Feb 18 13:00:35 crc kubenswrapper[4922]: I0218 13:00:35.518573 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-swr2w"] Feb 18 13:00:35 crc kubenswrapper[4922]: I0218 13:00:35.527927 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bf65p/crc-debug-swr2w"] Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.612038 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.699693 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host\") pod \"186cacb8-5e9c-4617-80ae-a6e968fa421b\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.699907 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host" (OuterVolumeSpecName: "host") pod "186cacb8-5e9c-4617-80ae-a6e968fa421b" (UID: "186cacb8-5e9c-4617-80ae-a6e968fa421b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.700019 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th4g8\" (UniqueName: \"kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8\") pod \"186cacb8-5e9c-4617-80ae-a6e968fa421b\" (UID: \"186cacb8-5e9c-4617-80ae-a6e968fa421b\") " Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.700595 4922 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186cacb8-5e9c-4617-80ae-a6e968fa421b-host\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.709721 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8" (OuterVolumeSpecName: "kube-api-access-th4g8") pod "186cacb8-5e9c-4617-80ae-a6e968fa421b" (UID: "186cacb8-5e9c-4617-80ae-a6e968fa421b"). InnerVolumeSpecName "kube-api-access-th4g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.803118 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th4g8\" (UniqueName: \"kubernetes.io/projected/186cacb8-5e9c-4617-80ae-a6e968fa421b-kube-api-access-th4g8\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:36 crc kubenswrapper[4922]: I0218 13:00:36.984510 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186cacb8-5e9c-4617-80ae-a6e968fa421b" path="/var/lib/kubelet/pods/186cacb8-5e9c-4617-80ae-a6e968fa421b/volumes" Feb 18 13:00:37 crc kubenswrapper[4922]: I0218 13:00:37.504778 4922 scope.go:117] "RemoveContainer" containerID="6115389c653a52211bb6d60a2ee801c77bf458f01c30b57fb3cbccdfadf9cac5" Feb 18 13:00:37 crc kubenswrapper[4922]: I0218 13:00:37.504930 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/crc-debug-swr2w" Feb 18 13:00:39 crc kubenswrapper[4922]: I0218 13:00:39.808023 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:00:39 crc kubenswrapper[4922]: I0218 13:00:39.808395 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.162731 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523661-8n69m"] Feb 18 13:01:00 crc kubenswrapper[4922]: E0218 13:01:00.163741 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186cacb8-5e9c-4617-80ae-a6e968fa421b" containerName="container-00" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.163764 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="186cacb8-5e9c-4617-80ae-a6e968fa421b" containerName="container-00" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.164042 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="186cacb8-5e9c-4617-80ae-a6e968fa421b" containerName="container-00" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.164805 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.178167 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523661-8n69m"] Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.267676 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.267730 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.267768 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.267976 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjblx\" (UniqueName: \"kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.369978 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.370034 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.370074 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.370131 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjblx\" (UniqueName: \"kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.378845 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.378876 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.385275 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.392637 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjblx\" (UniqueName: \"kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx\") pod \"keystone-cron-29523661-8n69m\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:00 crc kubenswrapper[4922]: I0218 13:01:00.501482 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.028593 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523661-8n69m"] Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.304173 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-794d859fd8-fbbnx_d8d3eec1-763e-4874-b2af-19401e383fed/barbican-api/0.log" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.496886 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-794d859fd8-fbbnx_d8d3eec1-763e-4874-b2af-19401e383fed/barbican-api-log/0.log" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.547501 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78995b5fcd-pmbbf_2664c9b6-f62a-4453-8771-8c273f5f9ec1/barbican-keystone-listener/0.log" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.704377 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-78995b5fcd-pmbbf_2664c9b6-f62a-4453-8771-8c273f5f9ec1/barbican-keystone-listener-log/0.log" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.746896 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-8n69m" event={"ID":"d70ede78-e133-44f7-8df1-fd86bfc44d38","Type":"ContainerStarted","Data":"d0f6fc0562afe44aa18fd1e63e1011ad503634a6e6172e5eb778e96a2afe56dd"} Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.746949 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-8n69m" event={"ID":"d70ede78-e133-44f7-8df1-fd86bfc44d38","Type":"ContainerStarted","Data":"e7b035461fd11d5f92f5e193670bb3215ebfae191541dbd8e9b48c5dd56d3796"} Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.773055 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523661-8n69m" podStartSLOduration=1.773032319 
podStartE2EDuration="1.773032319s" podCreationTimestamp="2026-02-18 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 13:01:01.761394564 +0000 UTC m=+5063.489098654" watchObservedRunningTime="2026-02-18 13:01:01.773032319 +0000 UTC m=+5063.500736409" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.782912 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676bd4cb85-2ggtc_93be7893-0b89-4762-870d-f5878ecddb3b/barbican-worker/0.log" Feb 18 13:01:01 crc kubenswrapper[4922]: I0218 13:01:01.846837 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676bd4cb85-2ggtc_93be7893-0b89-4762-870d-f5878ecddb3b/barbican-worker-log/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.040762 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5rdwv_2685dd3b-59b6-4879-b59a-215b187b1344/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.163166 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfc3cdcf-4513-4e18-8d43-c435fd877ae7/ceilometer-central-agent/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.233379 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfc3cdcf-4513-4e18-8d43-c435fd877ae7/ceilometer-notification-agent/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.259695 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfc3cdcf-4513-4e18-8d43-c435fd877ae7/proxy-httpd/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.324920 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfc3cdcf-4513-4e18-8d43-c435fd877ae7/sg-core/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.470280 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b897159b-9178-4f59-b254-08229460867d/cinder-api-log/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.544184 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b897159b-9178-4f59-b254-08229460867d/cinder-api/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.686817 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd3cd2cf-8780-4de2-925c-5385d6398e49/cinder-scheduler/0.log" Feb 18 13:01:02 crc kubenswrapper[4922]: I0218 13:01:02.741434 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_bd3cd2cf-8780-4de2-925c-5385d6398e49/probe/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.028091 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fcnzl_ff0774c0-eec1-4e8d-9368-8c3f26ec3d8d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.030051 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ws7ln_b2f62f96-5ba4-4d16-89d8-11ae5e941699/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.237036 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-zdlvc_d7048bd5-50d1-472a-a898-6cf57cf126d8/init/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.417024 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-zdlvc_d7048bd5-50d1-472a-a898-6cf57cf126d8/init/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.494277 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7b67m_28a59f5e-155a-44b9-827a-a48bf1615d3d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.665958 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-zdlvc_d7048bd5-50d1-472a-a898-6cf57cf126d8/dnsmasq-dns/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.771899 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5056168-d177-4e40-813a-db20d428ce9a/glance-log/0.log" Feb 18 13:01:03 crc kubenswrapper[4922]: I0218 13:01:03.790214 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5056168-d177-4e40-813a-db20d428ce9a/glance-httpd/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.045600 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_342c8bfd-c2d6-4afd-b2be-3e1474b63b62/glance-httpd/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.065656 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_342c8bfd-c2d6-4afd-b2be-3e1474b63b62/glance-log/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.266574 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bbf5454f6-d5958_3bc8759d-86ff-415d-936a-064ef742f0d9/horizon/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.408149 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bxngq_98bc83e7-66dd-4133-82cd-d4301c233f9d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.673407 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-sz5wh_c107695a-fdf7-48c6-b165-5e4dd2427148/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:04 crc kubenswrapper[4922]: I0218 13:01:04.853848 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7bbf5454f6-d5958_3bc8759d-86ff-415d-936a-064ef742f0d9/horizon-log/0.log" Feb 18 13:01:05 crc kubenswrapper[4922]: I0218 13:01:05.050586 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523601-t5w2s_07d51aec-efff-44ea-b9c5-c5335f63e0f2/keystone-cron/0.log" Feb 18 13:01:05 crc kubenswrapper[4922]: I0218 13:01:05.236799 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523661-8n69m_d70ede78-e133-44f7-8df1-fd86bfc44d38/keystone-cron/0.log" Feb 18 13:01:05 crc kubenswrapper[4922]: I0218 13:01:05.262984 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b854f8786-pls2t_2efd0609-4858-47ce-8213-6a74510e8acf/keystone-api/0.log" Feb 18 13:01:05 crc kubenswrapper[4922]: I0218 13:01:05.367080 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_1b492a6f-c8fc-4a76-8645-9f94a29d5e6b/kube-state-metrics/0.log" Feb 18 13:01:05 crc kubenswrapper[4922]: I0218 13:01:05.524980 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k2xhf_7d136111-09bf-46fe-aaf8-868a27741f9b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:06 crc kubenswrapper[4922]: I0218 13:01:06.143016 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f57669c89-7wt5g_49aa13b6-3343-43d5-949e-3118c1711ed0/neutron-api/0.log" Feb 18 13:01:06 crc kubenswrapper[4922]: I0218 13:01:06.192183 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-25nz6_9a5605fd-9ccc-45d2-bd39-7b9a6b8e36d3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:06 crc kubenswrapper[4922]: I0218 13:01:06.193992 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f57669c89-7wt5g_49aa13b6-3343-43d5-949e-3118c1711ed0/neutron-httpd/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.038020 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4a95479a-1834-4e95-b18a-c0bcef05f7ed/nova-cell0-conductor-conductor/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.165863 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_31ef9a9b-fedd-4afd-8582-19ef097c98a2/nova-cell1-conductor-conductor/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.567057 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5f598a92-b7cc-4584-9a17-d4c6d031ceeb/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.671045 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b05385b6-6350-4ee0-b628-a1eb55dd6067/nova-api-log/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.818183 4922 generic.go:334] "Generic (PLEG): container finished" podID="d70ede78-e133-44f7-8df1-fd86bfc44d38" containerID="d0f6fc0562afe44aa18fd1e63e1011ad503634a6e6172e5eb778e96a2afe56dd" exitCode=0 Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.818236 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-8n69m" event={"ID":"d70ede78-e133-44f7-8df1-fd86bfc44d38","Type":"ContainerDied","Data":"d0f6fc0562afe44aa18fd1e63e1011ad503634a6e6172e5eb778e96a2afe56dd"} Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.939463 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b05385b6-6350-4ee0-b628-a1eb55dd6067/nova-api-api/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.951407 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-hswp7_6e9e482a-c85e-473f-b848-e6fb6ba6afcd/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:07 crc kubenswrapper[4922]: I0218 13:01:07.990760 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6/nova-metadata-log/0.log" Feb 18 13:01:08 crc kubenswrapper[4922]: I0218 13:01:08.814955 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_873b23d0-3c83-4ab7-8178-1c4832c544a0/mysql-bootstrap/0.log" Feb 18 13:01:08 crc kubenswrapper[4922]: 
I0218 13:01:08.990375 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_873b23d0-3c83-4ab7-8178-1c4832c544a0/mysql-bootstrap/0.log" Feb 18 13:01:08 crc kubenswrapper[4922]: I0218 13:01:08.993181 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7319f7de-4554-4a03-ba7f-c0f414ab2fe5/nova-scheduler-scheduler/0.log" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.067528 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_873b23d0-3c83-4ab7-8178-1c4832c544a0/galera/0.log" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.221660 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.250206 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_302e3b56-c5a4-4e80-bb7e-a9e6a61a119e/mysql-bootstrap/0.log" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.348409 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data\") pod \"d70ede78-e133-44f7-8df1-fd86bfc44d38\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.348452 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys\") pod \"d70ede78-e133-44f7-8df1-fd86bfc44d38\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.348591 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjblx\" (UniqueName: \"kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx\") pod \"d70ede78-e133-44f7-8df1-fd86bfc44d38\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.348628 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle\") pod \"d70ede78-e133-44f7-8df1-fd86bfc44d38\" (UID: \"d70ede78-e133-44f7-8df1-fd86bfc44d38\") " Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.372089 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx" (OuterVolumeSpecName: "kube-api-access-sjblx") pod "d70ede78-e133-44f7-8df1-fd86bfc44d38" (UID: "d70ede78-e133-44f7-8df1-fd86bfc44d38"). InnerVolumeSpecName "kube-api-access-sjblx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.380774 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d70ede78-e133-44f7-8df1-fd86bfc44d38" (UID: "d70ede78-e133-44f7-8df1-fd86bfc44d38"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.450724 4922 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.452446 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjblx\" (UniqueName: \"kubernetes.io/projected/d70ede78-e133-44f7-8df1-fd86bfc44d38-kube-api-access-sjblx\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.495222 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d70ede78-e133-44f7-8df1-fd86bfc44d38" (UID: "d70ede78-e133-44f7-8df1-fd86bfc44d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.522875 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data" (OuterVolumeSpecName: "config-data") pod "d70ede78-e133-44f7-8df1-fd86bfc44d38" (UID: "d70ede78-e133-44f7-8df1-fd86bfc44d38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.554297 4922 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.554342 4922 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70ede78-e133-44f7-8df1-fd86bfc44d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.654022 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_302e3b56-c5a4-4e80-bb7e-a9e6a61a119e/mysql-bootstrap/0.log" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.732931 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_302e3b56-c5a4-4e80-bb7e-a9e6a61a119e/galera/0.log" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.806869 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.807132 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.807235 4922 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.808088 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.808246 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f" gracePeriod=600 Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.839315 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-8n69m" event={"ID":"d70ede78-e133-44f7-8df1-fd86bfc44d38","Type":"ContainerDied","Data":"e7b035461fd11d5f92f5e193670bb3215ebfae191541dbd8e9b48c5dd56d3796"} Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.839386 4922 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b035461fd11d5f92f5e193670bb3215ebfae191541dbd8e9b48c5dd56d3796" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.839498 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523661-8n69m" Feb 18 13:01:09 crc kubenswrapper[4922]: I0218 13:01:09.887259 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_245b1cb9-d98f-4875-adf6-ab887f76849d/openstackclient/0.log" Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.059437 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3ad5f860-e8c4-4b02-bd20-cd4d19ba20a6/nova-metadata-metadata/0.log" Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.613936 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-996pg_a2d0a226-07e2-402d-a868-2f8374670dac/ovn-controller/0.log" Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.667888 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wknpt_c831c6ce-ca0c-4f7d-8268-b4efe13e687d/openstack-network-exporter/0.log" Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.850284 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f" exitCode=0 Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.850340 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f"} Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.850398 4922 scope.go:117] "RemoveContainer" containerID="5cb544d05c9692e1a613f94aebafc80d5e6b64a4e004e82767f3ecc7c8a1d324" Feb 18 13:01:10 crc kubenswrapper[4922]: I0218 13:01:10.869872 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stvc7_cf286fe0-1b17-475a-b71b-ac4897c2f59d/ovsdb-server-init/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.065523 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stvc7_cf286fe0-1b17-475a-b71b-ac4897c2f59d/ovsdb-server/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.107926 4922 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stvc7_cf286fe0-1b17-475a-b71b-ac4897c2f59d/ovs-vswitchd/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.122964 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-stvc7_cf286fe0-1b17-475a-b71b-ac4897c2f59d/ovsdb-server-init/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.337535 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8jmtt_45d322f9-bf52-4679-ab43-9d222bc09a14/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.345689 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09bbc755-2862-437b-9ef3-515103f77710/openstack-network-exporter/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.433889 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_09bbc755-2862-437b-9ef3-515103f77710/ovn-northd/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.695289 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b6f3b4f2-3f65-4278-9cd0-753adfee2ecd/ovsdbserver-nb/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.700135 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b6f3b4f2-3f65-4278-9cd0-753adfee2ecd/openstack-network-exporter/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.881326 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003"} Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.891917 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_186f064b-a9e8-4637-a5eb-1646f2e1a783/openstack-network-exporter/0.log" Feb 18 13:01:11 crc kubenswrapper[4922]: I0218 13:01:11.919545 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_186f064b-a9e8-4637-a5eb-1646f2e1a783/ovsdbserver-sb/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.146155 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7c7b84785b-f8lmj_280ad3f5-10de-4dc8-866b-c7502c004835/placement-api/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.211057 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_13358646-85fa-4761-b4e8-ce5baf8851da/init-config-reloader/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.323581 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7c7b84785b-f8lmj_280ad3f5-10de-4dc8-866b-c7502c004835/placement-log/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.488776 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_13358646-85fa-4761-b4e8-ce5baf8851da/config-reloader/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.522041 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_13358646-85fa-4761-b4e8-ce5baf8851da/prometheus/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.531189 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_13358646-85fa-4761-b4e8-ce5baf8851da/thanos-sidecar/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.734180 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_13358646-85fa-4761-b4e8-ce5baf8851da/init-config-reloader/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.742850 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9eb7dcb0-20c5-414c-bc86-58461654bcb5/setup-container/0.log" Feb 18 13:01:12 crc kubenswrapper[4922]: I0218 13:01:12.922741 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9eb7dcb0-20c5-414c-bc86-58461654bcb5/setup-container/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:12.999998 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb934d91-0203-48d1-be6a-ab13e821993d/setup-container/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.014821 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9eb7dcb0-20c5-414c-bc86-58461654bcb5/rabbitmq/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.301501 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb934d91-0203-48d1-be6a-ab13e821993d/setup-container/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.345485 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lwp7z_fac1ed4a-2fa4-4220-80fb-f54e3a357fb9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.409153 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bb934d91-0203-48d1-be6a-ab13e821993d/rabbitmq/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.617481 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h7qdx_30aa9b56-28ab-4d32-beb5-965876a6e243/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.627152 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zs9qz_08ba745d-df3b-42c0-a384-ca64c96dd47f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.851000 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b5h25_227ab888-976c-4ce1-beb8-abbe305c6d79/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:13 crc kubenswrapper[4922]: I0218 13:01:13.869391 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cz4px_c2fa843a-470e-441c-93c9-8c412459933b/ssh-known-hosts-edpm-deployment/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.153326 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bb9876df9-jt7kg_8cc5cf6d-c722-42a3-8389-b991e77d1bbf/proxy-server/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.295910 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6bb9876df9-jt7kg_8cc5cf6d-c722-42a3-8389-b991e77d1bbf/proxy-httpd/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.378373 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-6gzbs_83fbf909-70fe-4d3c-9b45-3f5a6733779c/swift-ring-rebalance/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.495257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/account-auditor/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.525487 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/account-reaper/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.602596 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/account-replicator/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.674038 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/account-server/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.762445 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/container-auditor/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.808597 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/container-replicator/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.848910 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/container-server/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.917076 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/container-updater/0.log" Feb 18 13:01:14 crc kubenswrapper[4922]: I0218 13:01:14.976472 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/object-auditor/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.057067 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/object-expirer/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.140232 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/object-replicator/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.163405 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/object-server/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.197084 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/object-updater/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.282733 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/rsync/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.348004 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0771bdc1-7622-4a65-aa82-3150630ce652/swift-recon-cron/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.544015 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-57sjs_0c5871a2-bb79-4b43-a830-7714fa7d8241/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.662346 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4525818f-9e1d-48a0-8ec1-1a22a0841dd4/tempest-tests-tempest-tests-runner/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.818952 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fc2833dd-ab51-414c-9ce3-ed8078989ea5/test-operator-logs-container/0.log" Feb 18 13:01:15 crc kubenswrapper[4922]: I0218 13:01:15.931866 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hprwc_353e7c86-6842-40e4-ac3d-e2032eef15c5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 13:01:16 crc kubenswrapper[4922]: I0218 13:01:16.662533 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_cd84d8c9-0a98-4f6b-b6da-887f4d294a38/watcher-applier/0.log" Feb 18 13:01:17 crc kubenswrapper[4922]: I0218 13:01:17.146495 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_43b1edea-6c95-42ae-b30a-d3ce2eb1e0de/watcher-api-log/0.log" Feb 18 13:01:17 crc kubenswrapper[4922]: I0218 13:01:17.804606 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_3df41ae7-b237-49e2-902c-f33e693f5db9/watcher-decision-engine/0.log" Feb 18 13:01:20 crc kubenswrapper[4922]: I0218 13:01:20.228816 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_43b1edea-6c95-42ae-b30a-d3ce2eb1e0de/watcher-api/0.log" Feb 18 13:01:21 crc kubenswrapper[4922]: I0218 13:01:21.019455 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0ce20f52-4b9d-47a6-8da7-c64cd1d15623/memcached/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.334667 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/util/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.545854 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/pull/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.554087 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/util/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.598690 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/pull/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.805249 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/extract/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.838317 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/pull/0.log" Feb 18 13:01:45 crc kubenswrapper[4922]: I0218 13:01:45.889905 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_14461323130afa0a42cfbeae9371df3c6df69f0e6e31cfc4d749781171nvjzv_83694df8-b6fe-4913-8f73-d53972c81f36/util/0.log" Feb 18 13:01:46 crc kubenswrapper[4922]: I0218 13:01:46.416843 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-2ncv8_01766bee-50bd-4dcb-9b3d-831486ddeaf4/manager/0.log" Feb 18 13:01:46 crc kubenswrapper[4922]: I0218 13:01:46.867426 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-bnvrn_4c9af0bf-50d7-42ef-a8df-241b5ec63f5a/manager/0.log" Feb 18 13:01:47 crc kubenswrapper[4922]: I0218 13:01:47.290127 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-qm24h_51cd14ee-9b8a-421f-80bb-d208b752079d/manager/0.log" Feb 18 13:01:47 crc kubenswrapper[4922]: I0218 13:01:47.594215 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-82hvr_0032092e-84ca-426d-8f15-5141f4a8da20/manager/0.log" Feb 18 13:01:48 crc kubenswrapper[4922]: I0218 13:01:48.759658 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-r4v59_7753280d-fc59-4887-9d87-a2cfd83e7ba9/manager/0.log" Feb 18 13:01:48 crc kubenswrapper[4922]: I0218 13:01:48.881187 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-krt25_3c16d873-1097-4f56-913f-cc366ed34c23/manager/0.log" Feb 18 13:01:49 crc kubenswrapper[4922]: I0218 13:01:49.120472 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-6z2cq_61f73f1d-e472-411e-adc0-6755c47aa72b/manager/0.log" Feb 18 13:01:49 crc kubenswrapper[4922]: I0218 13:01:49.212291 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-jtfzr_324031ff-ceae-4065-9955-fd5745647cb0/manager/0.log" Feb 18 13:01:49 crc kubenswrapper[4922]: I0218 13:01:49.365240 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-c597h_2936db6d-8a5b-4da8-9e52-e508a6e757fe/manager/0.log" Feb 18 13:01:50 crc kubenswrapper[4922]: I0218 13:01:50.166898 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-gwbk7_a7487625-0c9e-4396-8eb8-5840ce4344c8/manager/0.log" Feb 18 13:01:50 crc kubenswrapper[4922]: I0218 13:01:50.184190 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-tn47v_0a8811b6-4023-427d-a893-628e0dd338e8/manager/0.log" Feb 18 13:01:50 crc kubenswrapper[4922]: I0218 13:01:50.503343 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-wrd8w_90b4a58a-81d7-4129-8f45-5429e963676e/manager/0.log" Feb 18 13:01:50 crc kubenswrapper[4922]: I0218 13:01:50.608663 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cxfsr7_081d9ec7-e338-437a-b3bc-af9b788db66a/manager/0.log" Feb 18 13:01:50 crc kubenswrapper[4922]: I0218 13:01:50.929750 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f8b4c896c-mdz6v_51a617b6-1c84-446a-a342-bd0687227c0c/operator/0.log" Feb 18 13:01:51 crc kubenswrapper[4922]: I0218 13:01:51.190831 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8rrxt_191b8ec5-c4e8-4e8c-92c2-fa2fd655f94a/registry-server/0.log" Feb 18 13:01:51 crc kubenswrapper[4922]: I0218 13:01:51.463222 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-z7pdl_42271b89-6aba-4e15-a2a1-856b656a1b6e/manager/0.log" Feb 18 13:01:51 crc kubenswrapper[4922]: I0218 13:01:51.881956 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-hddmr_66351682-3cdf-41cc-80d9-0bbb020144d2/manager/0.log" Feb 18 13:01:52 crc kubenswrapper[4922]: I0218 13:01:52.140916 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-98zrv_69ef021e-1b46-4aeb-8023-93f6fb366396/operator/0.log" Feb 18 13:01:52 crc kubenswrapper[4922]: I0218 13:01:52.405511 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-2bk9r_183b09db-ca5a-4aa1-b87b-908de4dc44ff/manager/0.log" Feb 18 13:01:52 crc kubenswrapper[4922]: I0218 13:01:52.935684 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-btlqf_387afbf1-afa5-414c-a22a-83a6a8197ff7/manager/0.log" Feb 18 13:01:53 crc kubenswrapper[4922]: I0218 13:01:53.112278 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-xdrrr_52123256-1372-49b6-80ed-c3112d14a8fa/manager/0.log" Feb 18 13:01:53 crc kubenswrapper[4922]: I0218 13:01:53.367018 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8685d86d55-pbbl7_d81b14bf-a056-4780-af1a-bf38babee5b3/manager/0.log" Feb 18 13:01:53 crc kubenswrapper[4922]: I0218 13:01:53.489886 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5689f5d7c4-95x8t_4c487619-568f-44a0-9d23-037794ada114/manager/0.log" Feb 18 13:01:53 crc kubenswrapper[4922]: I0218 13:01:53.615030 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4fm4m_8eae5053-64f3-401a-a151-dbf22f30a845/manager/0.log" Feb 18 13:01:58 crc kubenswrapper[4922]: I0218 13:01:58.655199 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-f8lbk_ae81863a-2778-4505-9106-c850f873a75d/manager/0.log" Feb 18 13:02:15 crc kubenswrapper[4922]: I0218 13:02:15.207170 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-b6msr_adf4d88c-a19b-49bf-bb62-eef23b55efae/control-plane-machine-set-operator/0.log" Feb 18 13:02:15 crc kubenswrapper[4922]: I0218 13:02:15.385340 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sz92_4a5c3121-2765-47df-aa3f-22595e4b4ea9/kube-rbac-proxy/0.log" Feb 18 13:02:15 crc kubenswrapper[4922]: I0218 13:02:15.427296 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5sz92_4a5c3121-2765-47df-aa3f-22595e4b4ea9/machine-api-operator/0.log" Feb 18 13:02:28 crc kubenswrapper[4922]: I0218 13:02:28.007700 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-tq4pt_906da7e7-ffe0-496f-bfb4-a76c2c14589e/cert-manager-controller/0.log" Feb 18 13:02:28 crc kubenswrapper[4922]: I0218 13:02:28.133792 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wlvsw_c066ada8-ed3e-4ca8-ba77-2c8c9014fdb1/cert-manager-cainjector/0.log" Feb 18 13:02:28 crc kubenswrapper[4922]: I0218 13:02:28.196211 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vvgzd_04a66d89-6415-45c5-b87b-b3730678eac4/cert-manager-webhook/0.log" Feb 18 13:02:40 crc kubenswrapper[4922]: I0218 13:02:40.949338 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-8x52x_8a41aeaf-5b15-4c8c-8abc-ad77b8e33896/nmstate-console-plugin/0.log" Feb 18 13:02:41 crc kubenswrapper[4922]: I0218 13:02:41.118937 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xgmj2_ea43ed5b-6735-4fd5-8fc5-1a01dcaeea01/nmstate-handler/0.log" Feb 18 13:02:41 crc kubenswrapper[4922]: I0218 13:02:41.140213 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2dtql_4e3e71a0-5178-4016-853d-0d0c31563d99/kube-rbac-proxy/0.log" Feb 18 13:02:41 crc kubenswrapper[4922]: I0218 13:02:41.264685 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-2dtql_4e3e71a0-5178-4016-853d-0d0c31563d99/nmstate-metrics/0.log" Feb 18 13:02:41 crc kubenswrapper[4922]: I0218 13:02:41.301890 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-p7vsx_578f51b2-8e78-4720-93f6-7cd9ce17e2ed/nmstate-operator/0.log" Feb 18 13:02:41 crc kubenswrapper[4922]: I0218 13:02:41.464312 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-7mvdv_df5bbc9b-9ba2-416b-93db-c4f6155b6906/nmstate-webhook/0.log" Feb 18 13:02:54 crc kubenswrapper[4922]: I0218 13:02:54.670252 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cq76p_1446ef26-f977-4255-a1b2-a42e8107303e/prometheus-operator/0.log" Feb 18 13:02:54 crc kubenswrapper[4922]: I0218 13:02:54.864113 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5_879d4ddb-47d1-4987-a980-e9f05104e5cb/prometheus-operator-admission-webhook/0.log" Feb 18 13:02:54 crc kubenswrapper[4922]: I0218 13:02:54.933056 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p_333644cd-a424-47a3-b701-378149dcdc80/prometheus-operator-admission-webhook/0.log" Feb 18 13:02:55 crc kubenswrapper[4922]: I0218 13:02:55.139857 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tkz2d_10c40ab6-7b55-410d-958e-3a6a37818c88/operator/0.log" Feb 18 13:02:55 crc kubenswrapper[4922]: I0218 13:02:55.146965 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mh85w_20e893d8-cc0c-4bdf-83d6-698e08e5d82b/perses-operator/0.log" Feb 18 13:03:08 crc kubenswrapper[4922]: I0218 13:03:08.938218 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8ds4f_4e80d896-3eb4-4dc8-b217-441a5a09dd05/kube-rbac-proxy/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.132023 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8ds4f_4e80d896-3eb4-4dc8-b217-441a5a09dd05/controller/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.255966 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-frr-files/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.388950 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-frr-files/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.424401 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-metrics/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.471021 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-reloader/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.488189 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-reloader/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.648680 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-frr-files/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.673840 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-metrics/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.681977 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-metrics/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.692892 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-reloader/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.862525 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-metrics/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.867142 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-reloader/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.880280 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/controller/0.log" Feb 18 13:03:09 crc kubenswrapper[4922]: I0218 13:03:09.882830 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/cp-frr-files/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.082863 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/frr-metrics/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.132443 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/kube-rbac-proxy-frr/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.133985 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/kube-rbac-proxy/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.297499 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/reloader/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.852876 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-9cn2f_74e84ee6-9d14-48aa-9e59-f1ee46e15fcf/frr-k8s-webhook-server/0.log" Feb 18 13:03:10 crc kubenswrapper[4922]: I0218 13:03:10.958257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-576949b4c-vwcqv_9fbb7bfe-c8d9-4a50-9326-bf07e99f4336/manager/0.log" Feb 18 13:03:11 crc kubenswrapper[4922]: I0218 13:03:11.173423 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d8c5554f7-psxr7_7c9c6b01-e766-411c-a275-ae7ea3a9659e/webhook-server/0.log" Feb 18 13:03:11 crc kubenswrapper[4922]: I0218 13:03:11.294481 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rvcx_aa729491-0a34-4772-8178-d8566c355add/kube-rbac-proxy/0.log" Feb 18 13:03:11 crc kubenswrapper[4922]: I0218 13:03:11.783249 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fwwpd_d069bacc-29a2-4aeb-9437-e654621c73c8/frr/0.log" Feb 18 13:03:11 crc kubenswrapper[4922]: I0218 13:03:11.885574 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rvcx_aa729491-0a34-4772-8178-d8566c355add/speaker/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.556800 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/util/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.668607 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/util/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.726636 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/pull/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.759157 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/pull/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.916332 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/util/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.919607 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/extract/0.log" Feb 18 13:03:23 crc kubenswrapper[4922]: I0218 13:03:23.930014 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08l9tnp_00938d04-ee62-4756-830e-f66e2fbaab9d/pull/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.074137 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/util/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.244765 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/pull/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.259484 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/util/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.270257 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/pull/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.407144 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/pull/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.437561 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/util/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.465676 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b59p4_188679bc-8b67-4136-94ce-fa515c1c950a/extract/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.585190 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-utilities/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.723327 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-content/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.739980 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-utilities/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.748840 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-content/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.920102 4922 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-content/0.log" Feb 18 13:03:24 crc kubenswrapper[4922]: I0218 13:03:24.926293 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/extract-utilities/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.100271 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-utilities/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.352528 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-utilities/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.396426 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-content/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.411579 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-content/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.625578 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-utilities/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.636333 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/extract-content/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.730126 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-cfw5z_523054ef-f8bb-4c7d-9baa-47191e299fcd/registry-server/0.log" Feb 18 13:03:25 crc kubenswrapper[4922]: I0218 13:03:25.865489 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/util/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.093551 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/pull/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.093932 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/util/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.166112 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/pull/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.351804 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/pull/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.371759 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/util/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.449029 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatx846_b19cf8eb-c4e0-42a2-bc33-246e5c756bda/extract/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.525734 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-h9zn6_f779c873-d525-428d-88ed-828d00bf17eb/registry-server/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.598941 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gjc8w_452cdbd0-d1e1-491a-8edd-d0f88f602364/marketplace-operator/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.743595 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-utilities/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.904594 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-utilities/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.919687 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-content/0.log" Feb 18 13:03:26 crc kubenswrapper[4922]: I0218 13:03:26.919675 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-content/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.065866 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-utilities/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.124215 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/extract-content/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.164275 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gsthg_0c2d2657-497c-4512-97ff-be630635c1df/registry-server/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.289218 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-utilities/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.459487 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-content/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.463168 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-utilities/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.470696 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-content/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.665502 4922 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-utilities/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.680340 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/extract-content/0.log" Feb 18 13:03:27 crc kubenswrapper[4922]: I0218 13:03:27.963687 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-n6xrc_5e74836e-69fc-4faa-ac09-05926ad4810a/registry-server/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.564175 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-755d44f7c-bqmb5_879d4ddb-47d1-4987-a980-e9f05104e5cb/prometheus-operator-admission-webhook/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.576662 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cq76p_1446ef26-f977-4255-a1b2-a42e8107303e/prometheus-operator/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.644531 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-755d44f7c-x5s2p_333644cd-a424-47a3-b701-378149dcdc80/prometheus-operator-admission-webhook/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.771638 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-mh85w_20e893d8-cc0c-4bdf-83d6-698e08e5d82b/perses-operator/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.788211 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tkz2d_10c40ab6-7b55-410d-958e-3a6a37818c88/operator/0.log" Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.807003 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:03:39 crc kubenswrapper[4922]: I0218 13:03:39.807068 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:04:09 crc kubenswrapper[4922]: I0218 13:04:09.808574 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:04:09 crc kubenswrapper[4922]: I0218 13:04:09.809064 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.756732 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:12 crc kubenswrapper[4922]: E0218 13:04:12.757505 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70ede78-e133-44f7-8df1-fd86bfc44d38" containerName="keystone-cron" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.757524 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70ede78-e133-44f7-8df1-fd86bfc44d38" containerName="keystone-cron" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.757771 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70ede78-e133-44f7-8df1-fd86bfc44d38" containerName="keystone-cron" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.759451 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.781141 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.914348 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvfr\" (UniqueName: \"kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.914460 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:12 crc kubenswrapper[4922]: I0218 13:04:12.914507 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.016625 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvfr\" (UniqueName: \"kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.016699 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.016743 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.017170 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.017676 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.039675 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvfr\" (UniqueName: \"kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr\") pod \"redhat-operators-99kzq\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.079728 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:13 crc kubenswrapper[4922]: I0218 13:04:13.578737 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:14 crc kubenswrapper[4922]: I0218 13:04:14.246850 4922 generic.go:334] "Generic (PLEG): container finished" podID="b4669bde-5144-4129-8236-f152c6a30cad" containerID="9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1" exitCode=0 Feb 18 13:04:14 crc kubenswrapper[4922]: I0218 13:04:14.246930 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerDied","Data":"9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1"} Feb 18 13:04:14 crc kubenswrapper[4922]: I0218 13:04:14.247911 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerStarted","Data":"1db80cea99a696ccd86930611cb56a401b734513da64fc07525a0f22be964c57"} Feb 18 13:04:14 crc kubenswrapper[4922]: I0218 13:04:14.249459 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 13:04:15 crc kubenswrapper[4922]: I0218 13:04:15.261929 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerStarted","Data":"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074"} Feb 18 13:04:18 crc kubenswrapper[4922]: I0218 13:04:18.309616 4922 generic.go:334] "Generic (PLEG): container finished" podID="b4669bde-5144-4129-8236-f152c6a30cad" containerID="7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074" exitCode=0 Feb 18 13:04:18 crc kubenswrapper[4922]: I0218 13:04:18.309772 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerDied","Data":"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074"} Feb 18 13:04:19 crc kubenswrapper[4922]: I0218 13:04:19.320731 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" 
event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerStarted","Data":"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6"} Feb 18 13:04:19 crc kubenswrapper[4922]: I0218 13:04:19.342634 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99kzq" podStartSLOduration=2.860000838 podStartE2EDuration="7.342618972s" podCreationTimestamp="2026-02-18 13:04:12 +0000 UTC" firstStartedPulling="2026-02-18 13:04:14.249186053 +0000 UTC m=+5255.976890133" lastFinishedPulling="2026-02-18 13:04:18.731804187 +0000 UTC m=+5260.459508267" observedRunningTime="2026-02-18 13:04:19.340493288 +0000 UTC m=+5261.068197368" watchObservedRunningTime="2026-02-18 13:04:19.342618972 +0000 UTC m=+5261.070323052" Feb 18 13:04:23 crc kubenswrapper[4922]: I0218 13:04:23.080511 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:23 crc kubenswrapper[4922]: I0218 13:04:23.082451 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:24 crc kubenswrapper[4922]: I0218 13:04:24.138135 4922 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-99kzq" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="registry-server" probeResult="failure" output=< Feb 18 13:04:24 crc kubenswrapper[4922]: timeout: failed to connect service ":50051" within 1s Feb 18 13:04:24 crc kubenswrapper[4922]: > Feb 18 13:04:33 crc kubenswrapper[4922]: I0218 13:04:33.128110 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:33 crc kubenswrapper[4922]: I0218 13:04:33.198055 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:33 crc kubenswrapper[4922]: I0218 13:04:33.375682 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:34 crc kubenswrapper[4922]: I0218 13:04:34.460305 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-99kzq" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="registry-server" containerID="cri-o://1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6" gracePeriod=2 Feb 18 13:04:34 crc kubenswrapper[4922]: I0218 13:04:34.936492 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.059549 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkvfr\" (UniqueName: \"kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr\") pod \"b4669bde-5144-4129-8236-f152c6a30cad\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.059751 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities\") pod \"b4669bde-5144-4129-8236-f152c6a30cad\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.059786 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content\") pod \"b4669bde-5144-4129-8236-f152c6a30cad\" (UID: \"b4669bde-5144-4129-8236-f152c6a30cad\") " Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.061649 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities" (OuterVolumeSpecName: "utilities") pod "b4669bde-5144-4129-8236-f152c6a30cad" (UID: "b4669bde-5144-4129-8236-f152c6a30cad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.074549 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr" (OuterVolumeSpecName: "kube-api-access-bkvfr") pod "b4669bde-5144-4129-8236-f152c6a30cad" (UID: "b4669bde-5144-4129-8236-f152c6a30cad"). InnerVolumeSpecName "kube-api-access-bkvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.162013 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkvfr\" (UniqueName: \"kubernetes.io/projected/b4669bde-5144-4129-8236-f152c6a30cad-kube-api-access-bkvfr\") on node \"crc\" DevicePath \"\"" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.162080 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.207896 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4669bde-5144-4129-8236-f152c6a30cad" (UID: "b4669bde-5144-4129-8236-f152c6a30cad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.263502 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4669bde-5144-4129-8236-f152c6a30cad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.475105 4922 generic.go:334] "Generic (PLEG): container finished" podID="b4669bde-5144-4129-8236-f152c6a30cad" containerID="1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6" exitCode=0 Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.475188 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerDied","Data":"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6"} Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.475255 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99kzq" event={"ID":"b4669bde-5144-4129-8236-f152c6a30cad","Type":"ContainerDied","Data":"1db80cea99a696ccd86930611cb56a401b734513da64fc07525a0f22be964c57"} Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.475275 4922 scope.go:117] "RemoveContainer" containerID="1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.475211 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99kzq" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.508585 4922 scope.go:117] "RemoveContainer" containerID="7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.515035 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.523719 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99kzq"] Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.538993 4922 scope.go:117] "RemoveContainer" containerID="9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.584379 4922 scope.go:117] "RemoveContainer" containerID="1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6" Feb 18 13:04:35 crc kubenswrapper[4922]: E0218 13:04:35.584783 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6\": container with ID starting with 1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6 not found: ID does not exist" containerID="1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.584812 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6"} err="failed to get container status \"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6\": rpc error: code = NotFound desc = could not find container \"1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6\": container with ID starting with 1a35611b6198e4cc11556e483d246cd0ff22b99a19c84b6e5b856dac1279dfd6 not found: ID does not exist" Feb 18 13:04:35 crc 
kubenswrapper[4922]: I0218 13:04:35.584836 4922 scope.go:117] "RemoveContainer" containerID="7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074" Feb 18 13:04:35 crc kubenswrapper[4922]: E0218 13:04:35.585063 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074\": container with ID starting with 7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074 not found: ID does not exist" containerID="7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.585100 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074"} err="failed to get container status \"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074\": rpc error: code = NotFound desc = could not find container \"7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074\": container with ID starting with 7c302066c13122506a7d14923dd931f9f8800c89244aa442754f6252fe024074 not found: ID does not exist" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.585120 4922 scope.go:117] "RemoveContainer" containerID="9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1" Feb 18 13:04:35 crc kubenswrapper[4922]: E0218 13:04:35.585534 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1\": container with ID starting with 9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1 not found: ID does not exist" containerID="9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1" Feb 18 13:04:35 crc kubenswrapper[4922]: I0218 13:04:35.585566 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1"} err="failed to get container status \"9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1\": rpc error: code = NotFound desc = could not find container \"9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1\": container with ID starting with 9d7e7f7fc79b163445197b7a284968b514d2be2abd566afbb0fe72271dff0bf1 not found: ID does not exist" Feb 18 13:04:36 crc kubenswrapper[4922]: I0218 13:04:36.987196 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4669bde-5144-4129-8236-f152c6a30cad" path="/var/lib/kubelet/pods/b4669bde-5144-4129-8236-f152c6a30cad/volumes" Feb 18 13:04:39 crc kubenswrapper[4922]: I0218 13:04:39.807014 4922 patch_prober.go:28] interesting pod/machine-config-daemon-znglx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:04:39 crc kubenswrapper[4922]: I0218 13:04:39.807296 4922 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:04:39 crc kubenswrapper[4922]: I0218 13:04:39.807374 4922 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-znglx" Feb 18 13:04:39 crc kubenswrapper[4922]: I0218 13:04:39.808148 4922 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003"} pod="openshift-machine-config-operator/machine-config-daemon-znglx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 13:04:39 crc kubenswrapper[4922]: I0218 13:04:39.808200 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerName="machine-config-daemon" containerID="cri-o://112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" gracePeriod=600 Feb 18 13:04:39 crc kubenswrapper[4922]: E0218 13:04:39.952734 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:04:40 crc kubenswrapper[4922]: I0218 13:04:40.526096 4922 generic.go:334] "Generic (PLEG): container finished" podID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" exitCode=0 Feb 18 13:04:40 crc kubenswrapper[4922]: I0218 13:04:40.526138 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerDied","Data":"112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003"} Feb 18 13:04:40 crc kubenswrapper[4922]: I0218 13:04:40.526170 4922 scope.go:117] "RemoveContainer" containerID="a3d670811b88086243e0421af6023cabe2c743133e3bc783d1903c4468a2a91f" Feb 18 13:04:40 crc kubenswrapper[4922]: I0218 13:04:40.527369 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:04:40 crc kubenswrapper[4922]: E0218 13:04:40.527747 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:04:52 crc kubenswrapper[4922]: I0218 13:04:52.974434 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:04:52 crc kubenswrapper[4922]: E0218 13:04:52.975222 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:05:04 
crc kubenswrapper[4922]: I0218 13:05:04.540982 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:05:04 crc kubenswrapper[4922]: E0218 13:05:04.541926 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:05:18 crc kubenswrapper[4922]: I0218 13:05:18.994063 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:05:18 crc kubenswrapper[4922]: E0218 13:05:18.994931 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:05:31 crc kubenswrapper[4922]: I0218 13:05:31.972803 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:05:31 crc kubenswrapper[4922]: E0218 13:05:31.973544 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:05:46 crc kubenswrapper[4922]: I0218 13:05:46.973123 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:05:46 crc kubenswrapper[4922]: E0218 13:05:46.974153 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:05:48 crc kubenswrapper[4922]: I0218 13:05:48.149651 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerID="1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c" exitCode=0 Feb 18 13:05:48 crc kubenswrapper[4922]: I0218 13:05:48.149699 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bf65p/must-gather-pnxz8" event={"ID":"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44","Type":"ContainerDied","Data":"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c"} Feb 18 13:05:48 crc kubenswrapper[4922]: I0218 13:05:48.150324 4922 scope.go:117] "RemoveContainer" containerID="1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c" Feb 18 13:05:48 crc kubenswrapper[4922]: I0218 13:05:48.543410 4922 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bf65p_must-gather-pnxz8_e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44/gather/0.log" Feb 18 13:05:56 crc kubenswrapper[4922]: I0218 13:05:56.709429 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bf65p/must-gather-pnxz8"] Feb 18 13:05:56 crc kubenswrapper[4922]: I0218 13:05:56.710392 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bf65p/must-gather-pnxz8" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="copy" containerID="cri-o://87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0" gracePeriod=2 Feb 18 13:05:56 crc kubenswrapper[4922]: I0218 13:05:56.725242 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bf65p/must-gather-pnxz8"] Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.202407 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bf65p_must-gather-pnxz8_e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44/copy/0.log" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.203690 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.236993 4922 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bf65p_must-gather-pnxz8_e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44/copy/0.log" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.237519 4922 generic.go:334] "Generic (PLEG): container finished" podID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerID="87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0" exitCode=143 Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.237581 4922 scope.go:117] "RemoveContainer" containerID="87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.237764 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bf65p/must-gather-pnxz8" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.262695 4922 scope.go:117] "RemoveContainer" containerID="1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.305858 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkczn\" (UniqueName: \"kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn\") pod \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.305921 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output\") pod \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\" (UID: \"e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44\") " Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.312435 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn" (OuterVolumeSpecName: "kube-api-access-hkczn") pod "e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" (UID: "e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44"). InnerVolumeSpecName "kube-api-access-hkczn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.352323 4922 scope.go:117] "RemoveContainer" containerID="87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0" Feb 18 13:05:57 crc kubenswrapper[4922]: E0218 13:05:57.352786 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0\": container with ID starting with 87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0 not found: ID does not exist" containerID="87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.352817 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0"} err="failed to get container status \"87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0\": rpc error: code = NotFound desc = could not find container \"87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0\": container with ID starting with 87e25aa3435f2488f0e8ad59cf6c52d09f26e56dbbb0c16afa85618c9286b2b0 not found: ID does not exist" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.352837 4922 scope.go:117] "RemoveContainer" containerID="1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c" Feb 18 13:05:57 crc kubenswrapper[4922]: E0218 13:05:57.353010 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c\": container with ID starting with 1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c not found: ID does not exist" containerID="1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.353030 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c"} err="failed to get container status \"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c\": rpc error: code = NotFound desc = could not find container \"1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c\": container with ID starting with 1a1f695d758905465b01fcc900845ef77ea0ac9586c1010946a5b06f4411a66c not found: ID does not exist" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.410633 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkczn\" (UniqueName: \"kubernetes.io/projected/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-kube-api-access-hkczn\") on node \"crc\" DevicePath \"\"" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.535541 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" (UID: "e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:05:57 crc kubenswrapper[4922]: I0218 13:05:57.614202 4922 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 13:05:58 crc kubenswrapper[4922]: I0218 13:05:58.985564 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" path="/var/lib/kubelet/pods/e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44/volumes" Feb 18 13:06:00 crc kubenswrapper[4922]: I0218 13:06:00.975673 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:06:00 crc kubenswrapper[4922]: E0218 13:06:00.976655 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:06:14 crc kubenswrapper[4922]: I0218 13:06:14.972951 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:06:14 crc kubenswrapper[4922]: E0218 13:06:14.973825 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:06:27 crc kubenswrapper[4922]: I0218 13:06:27.973523 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:06:27 crc kubenswrapper[4922]: E0218 13:06:27.974278 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:06:41 crc kubenswrapper[4922]: I0218 13:06:41.973459 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:06:41 crc kubenswrapper[4922]: E0218 13:06:41.974273 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:06:53 crc kubenswrapper[4922]: I0218 13:06:53.973717 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:06:53 crc kubenswrapper[4922]: E0218 13:06:53.974587 4922 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:07:07 crc kubenswrapper[4922]: I0218 13:07:07.972931 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:07:07 crc kubenswrapper[4922]: E0218 13:07:07.973776 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.222444 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:11 crc kubenswrapper[4922]: E0218 13:07:11.223277 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="extract-content" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223576 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="extract-content" Feb 18 13:07:11 crc kubenswrapper[4922]: E0218 13:07:11.223598 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="extract-utilities" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223606 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="extract-utilities" Feb 18 13:07:11 crc kubenswrapper[4922]: E0218 13:07:11.223615 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="gather" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223624 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="gather" Feb 18 13:07:11 crc kubenswrapper[4922]: E0218 13:07:11.223647 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="copy" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223656 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="copy" Feb 18 13:07:11 crc kubenswrapper[4922]: E0218 13:07:11.223672 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="registry-server" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223679 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="registry-server" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223933 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4669bde-5144-4129-8236-f152c6a30cad" containerName="registry-server" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223963 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" 
containerName="copy" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.223979 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0dbd879-002b-41f9-aa0f-dd9a7aa8fa44" containerName="gather" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.227415 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.237088 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.402127 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.402181 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7b9\" (UniqueName: \"kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.402521 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.504548 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.504613 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7b9\" (UniqueName: \"kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.504747 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.505172 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.505311 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.525963 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7b9\" (UniqueName: \"kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9\") pod \"certified-operators-mhb77\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:11 crc kubenswrapper[4922]: I0218 13:07:11.552431 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:12 crc kubenswrapper[4922]: I0218 13:07:12.120030 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:12 crc kubenswrapper[4922]: I0218 13:07:12.934112 4922 generic.go:334] "Generic (PLEG): container finished" podID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerID="2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f" exitCode=0 Feb 18 13:07:12 crc kubenswrapper[4922]: I0218 13:07:12.934215 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerDied","Data":"2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f"} Feb 18 13:07:12 crc kubenswrapper[4922]: I0218 13:07:12.934411 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerStarted","Data":"9b23fc28df447b251af9fd618c70dcc3bcf509261525ed49f1de115eec8204ed"} Feb 18 13:07:14 crc kubenswrapper[4922]: I0218 13:07:14.955759 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerStarted","Data":"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9"} Feb 18 13:07:15 crc kubenswrapper[4922]: E0218 13:07:15.415985 4922 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c08d45_e7a7_4df0_b6f5_bc7467e63e0c.slice/crio-conmon-d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c08d45_e7a7_4df0_b6f5_bc7467e63e0c.slice/crio-d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9.scope\": RecentStats: unable to find data in memory cache]" Feb 18 13:07:15 crc kubenswrapper[4922]: I0218 13:07:15.969552 4922 generic.go:334] "Generic (PLEG): container finished" podID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerID="d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9" exitCode=0 Feb 18 13:07:15 crc kubenswrapper[4922]: I0218 13:07:15.969618 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerDied","Data":"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9"} Feb 18 13:07:16 crc kubenswrapper[4922]: I0218 13:07:16.994785 4922 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerStarted","Data":"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9"} Feb 18 13:07:17 crc kubenswrapper[4922]: I0218 13:07:17.015896 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhb77" podStartSLOduration=2.3724311240000002 podStartE2EDuration="6.015880288s" podCreationTimestamp="2026-02-18 13:07:11 +0000 UTC" firstStartedPulling="2026-02-18 13:07:12.935688149 +0000 UTC m=+5434.663392229" lastFinishedPulling="2026-02-18 13:07:16.579137313 +0000 UTC m=+5438.306841393" observedRunningTime="2026-02-18 13:07:17.011787134 +0000 UTC m=+5438.739491224" watchObservedRunningTime="2026-02-18 13:07:17.015880288 +0000 UTC m=+5438.743584368" Feb 18 13:07:21 crc kubenswrapper[4922]: I0218 13:07:21.553513 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:21 crc kubenswrapper[4922]: I0218 13:07:21.554106 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:21 crc kubenswrapper[4922]: I0218 13:07:21.600560 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:22 crc kubenswrapper[4922]: I0218 13:07:22.079641 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:22 crc kubenswrapper[4922]: I0218 13:07:22.122210 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:22 crc kubenswrapper[4922]: I0218 13:07:22.973125 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:07:22 crc kubenswrapper[4922]: E0218 13:07:22.973720 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.048948 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhb77" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="registry-server" containerID="cri-o://1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9" gracePeriod=2 Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.515011 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.665765 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities\") pod \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.665937 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h7b9\" (UniqueName: \"kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9\") pod \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.666006 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content\") pod \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\" (UID: \"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c\") " Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.666811 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities" (OuterVolumeSpecName: "utilities") pod "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" (UID: "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.674105 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9" (OuterVolumeSpecName: "kube-api-access-8h7b9") pod "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" (UID: "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c"). InnerVolumeSpecName "kube-api-access-8h7b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.768562 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.768599 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h7b9\" (UniqueName: \"kubernetes.io/projected/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-kube-api-access-8h7b9\") on node \"crc\" DevicePath \"\"" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.939890 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" (UID: "45c08d45-e7a7-4df0-b6f5-bc7467e63e0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:07:24 crc kubenswrapper[4922]: I0218 13:07:24.972425 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.058956 4922 generic.go:334] "Generic (PLEG): container finished" podID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerID="1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9" exitCode=0 Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.059018 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerDied","Data":"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9"} Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.059950 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhb77" event={"ID":"45c08d45-e7a7-4df0-b6f5-bc7467e63e0c","Type":"ContainerDied","Data":"9b23fc28df447b251af9fd618c70dcc3bcf509261525ed49f1de115eec8204ed"} Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.059049 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhb77" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.059990 4922 scope.go:117] "RemoveContainer" containerID="1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.085013 4922 scope.go:117] "RemoveContainer" containerID="d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.095253 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.109338 4922 scope.go:117] "RemoveContainer" containerID="2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.121486 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhb77"] Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.159399 4922 scope.go:117] "RemoveContainer" containerID="1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9" Feb 18 13:07:25 crc kubenswrapper[4922]: E0218 13:07:25.159845 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9\": container with ID starting with 1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9 not found: ID does not exist" containerID="1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.159892 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9"} err="failed to get container status \"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9\": rpc error: code = NotFound desc = could not find container \"1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9\": container with ID starting with 1e335fe845c10501bf69adb521d2071968fffed624df954ac9f16774cf7af8e9 not found: ID does not exist" Feb 18 
13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.159921 4922 scope.go:117] "RemoveContainer" containerID="d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9" Feb 18 13:07:25 crc kubenswrapper[4922]: E0218 13:07:25.160117 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9\": container with ID starting with d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9 not found: ID does not exist" containerID="d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.160142 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9"} err="failed to get container status \"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9\": rpc error: code = NotFound desc = could not find container \"d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9\": container with ID starting with d91b6cea303f2de28d0a8cb55b55ce4c6b9bf9ae3bf255b5e82505529f1828d9 not found: ID does not exist" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.160157 4922 scope.go:117] "RemoveContainer" containerID="2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f" Feb 18 13:07:25 crc kubenswrapper[4922]: E0218 13:07:25.160427 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f\": container with ID starting with 2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f not found: ID does not exist" containerID="2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f" Feb 18 13:07:25 crc kubenswrapper[4922]: I0218 13:07:25.160448 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f"} err="failed to get container status \"2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f\": rpc error: code = NotFound desc = could not find container \"2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f\": container with ID starting with 2dd7ee1f953724e3c0334d3deb2d01dc2999914581eb57258d980386aa039c6f not found: ID does not exist" Feb 18 13:07:26 crc kubenswrapper[4922]: I0218 13:07:26.990023 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" path="/var/lib/kubelet/pods/45c08d45-e7a7-4df0-b6f5-bc7467e63e0c/volumes" Feb 18 13:07:37 crc kubenswrapper[4922]: I0218 13:07:37.973623 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:07:37 crc kubenswrapper[4922]: E0218 13:07:37.974386 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:07:52 crc kubenswrapper[4922]: I0218 13:07:52.973606 4922 scope.go:117] "RemoveContainer" 
containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:07:52 crc kubenswrapper[4922]: E0218 13:07:52.974309 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:08:07 crc kubenswrapper[4922]: I0218 13:08:07.973325 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:08:07 crc kubenswrapper[4922]: E0218 13:08:07.974482 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.813179 4922 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:15 crc kubenswrapper[4922]: E0218 13:08:15.815150 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="registry-server" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.815229 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="registry-server" Feb 18 13:08:15 crc kubenswrapper[4922]: E0218 13:08:15.815291 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="extract-content" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.815352 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="extract-content" Feb 18 13:08:15 crc kubenswrapper[4922]: E0218 13:08:15.815437 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="extract-utilities" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.815498 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="extract-utilities" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.815737 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c08d45-e7a7-4df0-b6f5-bc7467e63e0c" containerName="registry-server" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.817273 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.831179 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.948489 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.948563 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:15 crc kubenswrapper[4922]: I0218 13:08:15.949092 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4nf\" (UniqueName: \"kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.050724 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.050865 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4nf\" (UniqueName: \"kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.050917 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.051340 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.051352 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.083148 4922 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hr4nf\" (UniqueName: \"kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf\") pod \"redhat-marketplace-2tr7d\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.146397 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:16 crc kubenswrapper[4922]: I0218 13:08:16.599991 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:17 crc kubenswrapper[4922]: I0218 13:08:17.511944 4922 generic.go:334] "Generic (PLEG): container finished" podID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerID="fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9" exitCode=0 Feb 18 13:08:17 crc kubenswrapper[4922]: I0218 13:08:17.512032 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerDied","Data":"fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9"} Feb 18 13:08:17 crc kubenswrapper[4922]: I0218 13:08:17.512301 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerStarted","Data":"92e95d0f181c491562dbf6090df7742c16451f82f80274d633c92225b2a86025"} Feb 18 13:08:18 crc kubenswrapper[4922]: I0218 13:08:18.522304 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerStarted","Data":"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9"} Feb 18 13:08:19 crc kubenswrapper[4922]: I0218 13:08:19.532586 4922 generic.go:334] "Generic (PLEG): container finished" podID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerID="78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9" exitCode=0 Feb 18 13:08:19 crc kubenswrapper[4922]: I0218 13:08:19.532632 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerDied","Data":"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9"} Feb 18 13:08:19 crc kubenswrapper[4922]: I0218 13:08:19.973483 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:08:19 crc kubenswrapper[4922]: E0218 13:08:19.973903 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:08:20 crc kubenswrapper[4922]: I0218 13:08:20.544143 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerStarted","Data":"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf"} Feb 18 13:08:20 crc kubenswrapper[4922]: I0218 13:08:20.570403 4922 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-2tr7d" podStartSLOduration=3.156680084 podStartE2EDuration="5.570379904s" podCreationTimestamp="2026-02-18 13:08:15 +0000 UTC" firstStartedPulling="2026-02-18 13:08:17.515238933 +0000 UTC m=+5499.242943013" lastFinishedPulling="2026-02-18 13:08:19.928938753 +0000 UTC m=+5501.656642833" observedRunningTime="2026-02-18 13:08:20.560788311 +0000 UTC m=+5502.288492391" watchObservedRunningTime="2026-02-18 13:08:20.570379904 +0000 UTC m=+5502.298083994" Feb 18 13:08:26 crc kubenswrapper[4922]: I0218 13:08:26.147158 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:26 crc kubenswrapper[4922]: I0218 13:08:26.148075 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:26 crc kubenswrapper[4922]: I0218 13:08:26.190407 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:26 crc kubenswrapper[4922]: I0218 13:08:26.661206 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:26 crc kubenswrapper[4922]: I0218 13:08:26.715720 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:28 crc kubenswrapper[4922]: I0218 13:08:28.627548 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2tr7d" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="registry-server" containerID="cri-o://6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf" gracePeriod=2 Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.090004 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.115692 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content\") pod \"dafba0f3-3fc9-4640-9b50-ed91dacae456\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.115774 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr4nf\" (UniqueName: \"kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf\") pod \"dafba0f3-3fc9-4640-9b50-ed91dacae456\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.115958 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities\") pod \"dafba0f3-3fc9-4640-9b50-ed91dacae456\" (UID: \"dafba0f3-3fc9-4640-9b50-ed91dacae456\") " Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.116826 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities" (OuterVolumeSpecName: "utilities") pod "dafba0f3-3fc9-4640-9b50-ed91dacae456" (UID: "dafba0f3-3fc9-4640-9b50-ed91dacae456"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.117348 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.122593 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf" (OuterVolumeSpecName: "kube-api-access-hr4nf") pod "dafba0f3-3fc9-4640-9b50-ed91dacae456" (UID: "dafba0f3-3fc9-4640-9b50-ed91dacae456"). InnerVolumeSpecName "kube-api-access-hr4nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.144601 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dafba0f3-3fc9-4640-9b50-ed91dacae456" (UID: "dafba0f3-3fc9-4640-9b50-ed91dacae456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.218910 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dafba0f3-3fc9-4640-9b50-ed91dacae456-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.218952 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr4nf\" (UniqueName: \"kubernetes.io/projected/dafba0f3-3fc9-4640-9b50-ed91dacae456-kube-api-access-hr4nf\") on node \"crc\" DevicePath \"\"" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.639267 4922 generic.go:334] "Generic (PLEG): container finished" podID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerID="6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf" exitCode=0 Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.639312 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerDied","Data":"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf"} Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.639338 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tr7d" event={"ID":"dafba0f3-3fc9-4640-9b50-ed91dacae456","Type":"ContainerDied","Data":"92e95d0f181c491562dbf6090df7742c16451f82f80274d633c92225b2a86025"} Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.639344 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tr7d" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.639354 4922 scope.go:117] "RemoveContainer" containerID="6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.659164 4922 scope.go:117] "RemoveContainer" containerID="78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.681838 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.690919 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tr7d"] Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.698908 4922 scope.go:117] "RemoveContainer" containerID="fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.736941 4922 scope.go:117] "RemoveContainer" containerID="6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf" Feb 18 13:08:29 crc kubenswrapper[4922]: E0218 13:08:29.737390 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf\": container with ID starting with 6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf not found: ID does not exist" containerID="6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.737432 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf"} err="failed to get container status \"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf\": rpc error: code = NotFound desc = could not find container \"6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf\": container with ID starting with 6b04406f1f39eb78dd15003661715a98b5b5b4893d808592949d3621c656e8bf not found: ID does not exist" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.737458 4922 scope.go:117] "RemoveContainer" containerID="78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9" Feb 18 13:08:29 crc kubenswrapper[4922]: E0218 13:08:29.737721 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9\": container with ID starting with 78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9 not found: ID does not exist" containerID="78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.737750 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9"} err="failed to get container status \"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9\": rpc error: code = NotFound desc = could not find container \"78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9\": container with ID starting with 78f78102af1e4ae9d9c1f68a59aa0fdad955fdc330d38dc31f6af099cc297af9 not found: ID does not exist" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.737764 4922 scope.go:117] "RemoveContainer" 
containerID="fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9" Feb 18 13:08:29 crc kubenswrapper[4922]: E0218 13:08:29.737996 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9\": container with ID starting with fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9 not found: ID does not exist" containerID="fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9" Feb 18 13:08:29 crc kubenswrapper[4922]: I0218 13:08:29.738018 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9"} err="failed to get container status \"fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9\": rpc error: code = NotFound desc = could not find container \"fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9\": container with ID starting with fed158d3668fe2c8d8f255ced66c9052205f77cd8cc2ad5b3e580da203887ed9 not found: ID does not exist" Feb 18 13:08:30 crc kubenswrapper[4922]: I0218 13:08:30.973535 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:08:30 crc kubenswrapper[4922]: E0218 13:08:30.974092 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:08:30 crc kubenswrapper[4922]: I0218 13:08:30.984933 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" path="/var/lib/kubelet/pods/dafba0f3-3fc9-4640-9b50-ed91dacae456/volumes" Feb 18 13:08:45 crc kubenswrapper[4922]: I0218 13:08:45.973430 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:08:45 crc kubenswrapper[4922]: E0218 13:08:45.974437 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:08:59 crc kubenswrapper[4922]: I0218 13:08:59.973675 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:08:59 crc kubenswrapper[4922]: E0218 13:08:59.974382 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.246959 4922 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:10 crc kubenswrapper[4922]: E0218 13:09:10.248961 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="extract-utilities" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.249041 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="extract-utilities" Feb 18 13:09:10 crc kubenswrapper[4922]: E0218 13:09:10.249142 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="extract-content" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.249205 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="extract-content" Feb 18 13:09:10 crc kubenswrapper[4922]: E0218 13:09:10.249265 4922 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="registry-server" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.249317 4922 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="registry-server" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.249560 4922 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafba0f3-3fc9-4640-9b50-ed91dacae456" containerName="registry-server" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.251236 4922 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.291411 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.418329 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.418709 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2tc\" (UniqueName: \"kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.418778 4922 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.521232 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2tc\" (UniqueName: \"kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.521671 4922 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.521925 4922 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.522134 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.522290 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.541280 4922 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2tc\" (UniqueName: \"kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc\") pod \"community-operators-pw2bz\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:10 crc kubenswrapper[4922]: I0218 13:09:10.596323 4922 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:11 crc kubenswrapper[4922]: I0218 13:09:11.088253 4922 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:12 crc kubenswrapper[4922]: I0218 13:09:12.009250 4922 generic.go:334] "Generic (PLEG): container finished" podID="b0615d08-764f-4a22-8877-94c6e23119ef" containerID="908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74" exitCode=0 Feb 18 13:09:12 crc kubenswrapper[4922]: I0218 13:09:12.009441 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerDied","Data":"908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74"} Feb 18 13:09:12 crc kubenswrapper[4922]: I0218 13:09:12.009975 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerStarted","Data":"e7c03e13078ba65076c28d4d878840509f8443583976608c7c256183fc823241"} Feb 18 13:09:13 crc kubenswrapper[4922]: I0218 13:09:13.020675 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerStarted","Data":"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b"} Feb 18 13:09:14 crc kubenswrapper[4922]: I0218 13:09:14.973480 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:09:14 crc kubenswrapper[4922]: E0218 13:09:14.974705 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:09:15 crc kubenswrapper[4922]: I0218 13:09:15.038101 4922 generic.go:334] "Generic (PLEG): container finished" podID="b0615d08-764f-4a22-8877-94c6e23119ef" containerID="65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b" exitCode=0 Feb 18 13:09:15 crc kubenswrapper[4922]: I0218 13:09:15.038147 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerDied","Data":"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b"} Feb 18 13:09:15 crc kubenswrapper[4922]: I0218 13:09:15.041452 4922 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 13:09:16 crc kubenswrapper[4922]: I0218 13:09:16.048694 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerStarted","Data":"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e"} Feb 18 13:09:16 crc kubenswrapper[4922]: I0218 13:09:16.070719 4922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pw2bz" podStartSLOduration=2.652823639 podStartE2EDuration="6.070693909s" podCreationTimestamp="2026-02-18 13:09:10 +0000 UTC" 
firstStartedPulling="2026-02-18 13:09:12.013297487 +0000 UTC m=+5553.741001567" lastFinishedPulling="2026-02-18 13:09:15.431167757 +0000 UTC m=+5557.158871837" observedRunningTime="2026-02-18 13:09:16.06560971 +0000 UTC m=+5557.793313790" watchObservedRunningTime="2026-02-18 13:09:16.070693909 +0000 UTC m=+5557.798397989" Feb 18 13:09:20 crc kubenswrapper[4922]: I0218 13:09:20.597511 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:20 crc kubenswrapper[4922]: I0218 13:09:20.598062 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:20 crc kubenswrapper[4922]: I0218 13:09:20.643620 4922 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:21 crc kubenswrapper[4922]: I0218 13:09:21.146398 4922 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:21 crc kubenswrapper[4922]: I0218 13:09:21.198840 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.110267 4922 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pw2bz" podUID="b0615d08-764f-4a22-8877-94c6e23119ef" containerName="registry-server" containerID="cri-o://98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e" gracePeriod=2 Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.573311 4922 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.680521 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content\") pod \"b0615d08-764f-4a22-8877-94c6e23119ef\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.680781 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2tc\" (UniqueName: \"kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc\") pod \"b0615d08-764f-4a22-8877-94c6e23119ef\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.680855 4922 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities\") pod \"b0615d08-764f-4a22-8877-94c6e23119ef\" (UID: \"b0615d08-764f-4a22-8877-94c6e23119ef\") " Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.682385 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities" (OuterVolumeSpecName: "utilities") pod "b0615d08-764f-4a22-8877-94c6e23119ef" (UID: "b0615d08-764f-4a22-8877-94c6e23119ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.687228 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc" (OuterVolumeSpecName: "kube-api-access-wb2tc") pod "b0615d08-764f-4a22-8877-94c6e23119ef" (UID: "b0615d08-764f-4a22-8877-94c6e23119ef"). InnerVolumeSpecName "kube-api-access-wb2tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.737650 4922 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0615d08-764f-4a22-8877-94c6e23119ef" (UID: "b0615d08-764f-4a22-8877-94c6e23119ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.783109 4922 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.783563 4922 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2tc\" (UniqueName: \"kubernetes.io/projected/b0615d08-764f-4a22-8877-94c6e23119ef-kube-api-access-wb2tc\") on node \"crc\" DevicePath \"\"" Feb 18 13:09:23 crc kubenswrapper[4922]: I0218 13:09:23.783693 4922 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0615d08-764f-4a22-8877-94c6e23119ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.123721 4922 generic.go:334] "Generic (PLEG): container finished" podID="b0615d08-764f-4a22-8877-94c6e23119ef" containerID="98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e" exitCode=0 Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.123779 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerDied","Data":"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e"} Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.123825 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw2bz" event={"ID":"b0615d08-764f-4a22-8877-94c6e23119ef","Type":"ContainerDied","Data":"e7c03e13078ba65076c28d4d878840509f8443583976608c7c256183fc823241"} Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.123847 4922 scope.go:117] "RemoveContainer" containerID="98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.123919 4922 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw2bz" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.145277 4922 scope.go:117] "RemoveContainer" containerID="65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.157647 4922 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.167026 4922 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pw2bz"] Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.189121 4922 scope.go:117] "RemoveContainer" containerID="908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.208298 4922 scope.go:117] "RemoveContainer" containerID="98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e" Feb 18 13:09:24 crc kubenswrapper[4922]: E0218 13:09:24.208714 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e\": container with ID starting with 98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e not found: ID does not exist" containerID="98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.208767 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e"} err="failed to get container status \"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e\": rpc error: code = NotFound desc = could not find container \"98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e\": container with ID starting with 98c26d86df78b4f621fa13bc0f508b740a3e9be7d2950a117cc1d052828a4b5e not found: ID does not exist" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.208798 4922 scope.go:117] "RemoveContainer" containerID="65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b" Feb 18 13:09:24 crc kubenswrapper[4922]: E0218 13:09:24.209060 4922 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b\": container with ID starting with 65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b not found: ID does not exist" containerID="65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.209088 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b"} err="failed to get container status \"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b\": rpc error: code = NotFound desc = could not find container \"65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b\": container with ID starting with 65525613a226afeeb976852cdbbb3222a7eb669500cd277dc78ac0385c20605b not found: ID does not exist" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.209105 4922 scope.go:117] "RemoveContainer" containerID="908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74" Feb 18 13:09:24 crc kubenswrapper[4922]: E0218 13:09:24.209382 4922 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74\": container with ID starting with 908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74 not found: ID does not exist" containerID="908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.209407 4922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74"} err="failed to get container status \"908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74\": rpc error: code = NotFound desc = could not find container \"908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74\": container with ID starting with 908890a96584ac69af3d7a4795a02523328fd40c50d264b71e0dd53105fb4a74 not found: ID does not exist" Feb 18 13:09:24 crc kubenswrapper[4922]: I0218 13:09:24.987881 4922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0615d08-764f-4a22-8877-94c6e23119ef" path="/var/lib/kubelet/pods/b0615d08-764f-4a22-8877-94c6e23119ef/volumes" Feb 18 13:09:28 crc kubenswrapper[4922]: I0218 13:09:28.980716 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:09:28 crc kubenswrapper[4922]: E0218 13:09:28.981299 4922 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-znglx_openshift-machine-config-operator(fdb7cedc-b2e3-48f0-80e0-e17073b43228)\"" pod="openshift-machine-config-operator/machine-config-daemon-znglx" podUID="fdb7cedc-b2e3-48f0-80e0-e17073b43228" Feb 18 13:09:41 crc kubenswrapper[4922]: I0218 13:09:41.973831 4922 scope.go:117] "RemoveContainer" containerID="112ba381fc4b9cc3120f9b69127c4ddbe0d5c14674a4b80d3ac1d60cc8c37003" Feb 18 13:09:42 crc kubenswrapper[4922]: I0218 13:09:42.290807 4922 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-znglx" event={"ID":"fdb7cedc-b2e3-48f0-80e0-e17073b43228","Type":"ContainerStarted","Data":"d5a75f5dffb9bb27b6966398e4cdc574eee1f469794b1d2baf0992bcc03c6b7b"}